Re: Auto complete

2008-07-08 Thread daniel rosher
Hi,

This is how we implement our autocomplete feature, excerpt from
schema.xml

-First, accept the input as is, without alteration.
-Lowercase the input and eliminate all non a-z0-9 characters to normalize
the input.
-Split the result into multiple tokens with EdgeNGramFilterFactory, up to a
maximum of 100 characters, all starting from the beginning of the input;
e.g. hello becomes h, he, hel, hell, hello.
-For queries we accept only the first 20 characters.

Hope this helps.


<fieldType name="autocomplete" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="([^a-z0-9])" replacement="" replace="all"/>
    <filter class="solr.EdgeNGramFilterFactory"
            maxGramSize="100" minGramSize="1"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="([^a-z0-9])" replacement="" replace="all"/>
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="^(.{20})(.*)?" replacement="$1" replace="all"/>
  </analyzer>
</fieldType>
...
<field name="ac" type="autocomplete" indexed="true" stored="true"
       required="false"/>
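For illustration, with this field in place a value like "Hello World" is
normalized to "helloworld" and indexed as the grams h, he, hel, ...,
helloworld, so a prefix request such as (host, port and handler here are
just the stock example values, not from our setup)

http://localhost:8983/solr/select?q=ac:hel

has its input run through the same lowercase/strip chain, and the single
term "hel" matches every document whose normalized value starts with "hel".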

Regards,
Dan




On Mon, 2008-07-07 at 17:12 +, sundar shankar wrote:
 Hi All,
    I have been using Solr for some time and am having trouble with an auto 
 complete feature that I have been trying to incorporate. I am indexing a 
 database column mapped to a Solr field. I have tried various configs that 
 were mentioned in the Solr user community suggestions and have tried a few 
 options of my own too. Each of them seems to either not bring me the exact 
 data I want or to bring back excess data.
 
 I have tried:
 text_ws,
 text,
 string,
 EdgeNGramTokenizerFactory,
 the subword example,
 textTight,
 and juggling around some of the filters and analyzers together.
 
 I couldn't get dismax to work, as somehow it wasn't able to connect my field 
 defined in the schema to the qf param that I was passing in the request.
 
 textTight gave the best results I had, but the problem there was that it was 
 searching for whole words and not partial words.
 For example:
 
 If my query string was field1:Word1 word2* I was getting back results, but if 
 my query string was field1:Word1 wor* I didn't get a result back.
 
 I am a little perplexed about how to implement this. I don't know what has to 
 be done.
 
 The schema
 
 
 <field name="institution.name" type="text_ws" indexed="true" stored="true" 
  termVectors="true"/>
 <!--Sundar changed city to subword so that spaces are ignored-->
 
 <field name="instAlphaSort" type="alphaOnlySort" indexed="true" 
  stored="false" multiValued="true"/>
 <!-- Tight text cos we want results to be much the same for this-->
 <field name="instText" type="text" indexed="true" stored="true" 
  termVectors="true" multiValued="true"/>
 <field name="instString" type="autosuggest" indexed="true" stored="true" 
  termVectors="true" multiValued="true"/>
 
 <field name="instSubword" type="subword" indexed="true" stored="true" 
  multiValued="true" termVectors="true"/>
 <field name="instTight" type="textTight" indexed="true" stored="true" 
  multiValued="true" termVectors="true"/>
 
 
 
 I index institution.name only; the rest are copy fields of the same.
 
 
 Any help is appreciated.
 
 Thanks
 Sundar
 
 _
 Chose your Life Partner? Join MSN Matrimony
 http://www.shaadi.com/msn/matrimony.php 
 
 This email has been scanned for virus and spam content
Daniel Rosher
Developer
www.thehotonlinenetwork.com
d: 0207 3489 912
t: 0845 4680 568
f: 0845 4680 868
Beaumont House, Kensington Village, Avonmore Road, London, W14 8TS





Integrate Solr with Tomcat in Linux

2008-07-08 Thread sandeep kaur
Hi,

  I have Solr running with Jetty as the server application on Linux.

Could anyone please tell me what changes I need to make to run Solr with 
Tomcat on Linux?

Thanks,
Sandip

--- On Mon, 7/7/08, Benson Margulies [EMAIL PROTECTED] wrote:

 From: Benson Margulies [EMAIL PROTECTED]
 Subject: Re: js client
 To: [EMAIL PROTECTED], solr-user solr-user@lucene.apache.org
 Date: Monday, 7 July, 2008, 11:43 PM
 The Javascript should have the right URL automatically if
 you get it from
 the ?js URL.
 
 Anyway, I think I was the first person to say
 'stupid' about that WSDL in
 the sample.
 
 I'm not at all clear on what you are doing at this
 point.
 
 Please send along  the URL that works for you in soapUI and
 the URL that
 works for you in the script.../script
 element.
 
 
 
 
 On Mon, Jul 7, 2008 at 5:54 AM, Christine Karman
 [EMAIL PROTECTED]
 wrote:
 
  On Sun, 2008-07-06 at 10:25 -0400, Benson Margulies
 wrote:
   In the sample, it is a relative URL to the web
 service endpoint. The
   sample starts from a stupid WSDL with silly names
 for the service and
   the port.
 
  I'm sorry about using the word stupid.
 
  
   Take your endpoint deployment URL, the very URL
 that is logged when
   your service starts up, and add ?js to the end of
 it. Period.
 
  Yes, that's what I do, and that part has been
 working all the time. What
  doesn't work is I use the same url without the ?js
 for the web service.
  Is there a way to see the Jetty log file? Mabye that
 will give me a clue
  what's happening. If nothing is in the jetty log
 file, I know the
  problem is elsewhere.
 
  Christine
 
  
   If it is
  
   http://wendy.christine.nl:9000/soap, make it
  
http://wendy.christine.nl:9000/soap?js
  
  
  
   The sample is taking advantage of relative URLs
 to avoid typing
   http://etc.
  
   On Sun, Jul 6, 2008 at 8:52 AM, Christine Karman
   [EMAIL PROTECTED] wrote:
   On Sun, 2008-07-06 at 07:37 -0400, Benson
 Margulies wrote:
The javascript client probably
 cannot handle redirects. If
   you are now
using ?js, you shouldn't need a
 redirect.
  
  
   well actually, the server redirect is
 similar to a rewrite. It
   makes /soap the same as /soap:9000.
 removing the redirect
   brings me back
   to the 650 error, access to
 restricted uri denied.
  
   What does the /Soap/SoapPort mean in the
 sample? how does that
   translate
   to my localhost:9000 or localhost/soap?
 (localhost ==
   wendy.christine.nl).
  
   It's silly that creating the web
 service from my java code is
   so very
   simple, and that some stupid javascript
 code can't be
   persuaded to work
   properly :-)  I've done my part of
 javascript but I have never
   liked it.
  
   CXF is really good. I was in a project a
 while ago where I
   suggested to
   use cxf, but someone who was supposed to
 have releavant
   experience
   insisted on using axis2. Took him a week
 to create some soap
   services.
   He had to remove all enums  and nested
 objects from the
   project because
   axis wouldn't support that. A friend
 of mine is using cxf in
   his project
   and he insisted I use it also.
  
   Christine
  
  
   
The browser allows an HTML page to
 load javascript from
   anywhere. Once
it has loaded javascript from a
 host:port, it will allow
   outbound
connections to that host:port.
   
So, once you use
 src=./?js you should be set.
   
The sample does not fully
 demonstrate this effect, since it
   has the
benefit of really  running the web
 service and the static
   HTML from
the very same host::port.
   
In the past, before there was such a
 thing as the ?js URL,
   the
solution here was a reverse proxy
 instead of a redirect. You
   set up
URL rewriting in plain old Apache 2
 so that xxx:9000 is
   transparently
available at xxx.
   
I don't have my recipe for this
 available at home, if you're
   still
stuck tomorrow I can dig it out of
 my office.
   
   
   
   
On Sun, Jul 6, 2008 at 7:20 AM,
 Christine
   [EMAIL PROTECTED]
wrote:
Benson,
I'm still struggling.
 This is what I have now.
I have copied the Greeter
 example
   (js-browser-client-simple)
from the
samples. Because
 cross-scripting is not allowed (I
   think that
was
causing the 650 error I got)
 I have created a
   redirect in my
 

Re: Integrate Solr with Tomcat in Linux

2008-07-08 Thread Shalin Shekhar Mangar
Take a look at http://wiki.apache.org/solr/SolrTomcat

Please avoid replying to an older message when you're starting a new topic.

On Tue, Jul 8, 2008 at 4:36 PM, sandeep kaur [EMAIL PROTECTED]
wrote:

 Hi,

  I have solr with jetty as server application running on Linux.

 Could anyone please tell me the changes i need to make to integrate Tomcat
 with solr on Linux.

 Thanks,
 Sandip


Re: Integrate Solr with Tomcat in Linux

2008-07-08 Thread sandeep kaur
Thanks, and sorry. I will take care of this next time.


--- On Tue, 8/7/08, Shalin Shekhar Mangar [EMAIL PROTECTED] wrote:

 From: Shalin Shekhar Mangar [EMAIL PROTECTED]
 Subject: Re: Integrate Solr with Tomcat in Linux
 To: solr-user@lucene.apache.org, [EMAIL PROTECTED]
 Date: Tuesday, 8 July, 2008, 4:40 PM
 Take a look at http://wiki.apache.org/solr/SolrTomcat
 
 Please avoid replying to an older message when you're
 starting a new topic.
 
 On Tue, Jul 8, 2008 at 4:36 PM, sandeep kaur
 [EMAIL PROTECTED]
 wrote:
 
  Hi,
 
   I have solr with jetty as server application running
 on Linux.
 
  Could anyone please tell me the changes i need to make
 to integrate Tomcat
  with solr on Linux.
 
  Thanks,
  Sandip
 

problems with SpellCheckComponent

2008-07-08 Thread Roberto Nieto
Hi,

I downloaded the trunk version today and I'm having problems with the
SpellCheckComponent. Is this a known bug?

This is my configuration:
#
<searchComponent name="spellcheck"
  class="org.apache.solr.handler.component.SpellCheckComponent">
  <lst name="defaults">
    <!-- omp = Only More Popular -->
    <str name="spellcheck.onlyMorePopular">false</str>
    <!-- exr = Extended Results -->
    <str name="spellcheck.extendedResults">false</str>
    <!-- The number of suggestions to return -->
    <str name="spellcheck.count">1</str>
  </lst>
  <str name="queryAnalyzerFieldType">text</str>

  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">title</str>
    <str name="spellcheckIndexDir">spellchecker_defaultXX</str>
  </lst>
</searchComponent>

<queryConverter name="queryConverter"
  class="org.apache.solr.spelling.SpellingQueryConverter" />

<requestHandler name="/spellCheckCompRH"
  class="org.apache.solr.handler.component.SearchHandler">
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
##

SCHEMA.XML:... field name=*title* type=*text* indexed=*true* stored=
*true* / ...

When I make this request:
http://localhost:8080/solr/spellCheckCompRH?q=*:*&spellcheck.q=ruck&spellcheck=true

I have this exception:

HTTP Status 500 - null

java.lang.NullPointerException
    at org.apache.solr.handler.component.SpellCheckComponent.getTokens(SpellCheckComponent.java:217)
    at org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:184)
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:156)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:128)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1025)
    at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:272)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
    at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:852)
    at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:584)
    at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1508)
    at java.lang.Thread.run(Unknown Source)

Any help would be very useful. Thanks for your attention.

Rober


Re: problems with SpellCheckComponent

2008-07-08 Thread Shalin Shekhar Mangar
Hi Roberto,

1. Why do you have those asterisk characters in your schema field
definition?
2. Did you do a spellcheck.build=true before issuing the first spell check
request?

Also, as per the latest docs on the wiki (
http://wiki.apache.org/solr/SpellCheckComponent ), the defaults section
should be moved to the /spellCheckCompRH section.
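For example, the handler section would then look roughly like this (a minimal
sketch following the wiki's layout, reusing Roberto's parameter values):

<requestHandler name="/spellCheckCompRH"
  class="org.apache.solr.handler.component.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck.onlyMorePopular">false</str>
    <str name="spellcheck.extendedResults">false</str>
    <str name="spellcheck.count">1</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>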





-- 
Regards,
Shalin Shekhar Mangar.


Re: problems with SpellCheckComponent

2008-07-08 Thread Geoffrey Young



When I made:
http://localhost:8080/solr/spellCheckCompRH?q=*:*&spellcheck.q=ruck&spellcheck=true

I have this exception:

Estado HTTP 500 - null java.lang.NullPointerException at
org.apache.solr.handler.component.SpellCheckComponent.getTokens(SpellCheckComponent.java:217)



I see this all the time - to the point where I wonder how stable the new 
component is.


I *think* I've traced it to

  o the presence of both q *and* spellcheck.q
  o and *any* restart of solr without re-issuing spellcheck.build=true

I haven't been using any form of spellchecker for long, but I'm 
reasonably sure that I didn't need to rebuild on every restart.  I also 
used to think it was changes to schema.xml (and not a simple restart) 
that caused the issue, but I've seen the exception with no changes. 
I've also seen the exception pop up without a restart when the server 
sits overnight (last query of the day ok, go to sleep, query again in 
the morning and *boom*)


but regardless of restart issues, I've never seen it happen with just 
the q or just the spellcheck.q fields in my query - it's always when 
they're both there.


--Geoff


Pre-processor for stored fields

2008-07-08 Thread Hugo Barauna
Hi,

I have already asked this, but I didn't get a good answer, so I will try
again. I need to pre-process a stored field before it is saved, just like a
field that is going to be indexed. It would be good to be able to apply an
analyzer to this stored field.

My problem is that I have to send HTML documents to Solr and use an HTML
filter to remove the HTML tags, but that doesn't work for the stored
representation of that field.

I found some possible solutions to my problem
(https://issues.apache.org/jira/browse/SOLR-314 and
https://issues.apache.org/jira/browse/SOLR-269), but I would like to know if
there is something better.

Thanks!

-- 
Hugo Pessoa de Baraúna

Se vc faz tudo igual a todo mundo, não pode esperar resultados diferentes.

http://hugobarauna.blogspot.com/


Re: First version of solr javascript client to review

2008-07-08 Thread Matthias Epheser

Erik Hatcher schrieb:

Mattias,

Nice start!

One comment

In test.html:
  new $sj.solrjs.Manager({solrUrl: "http://localhost:8983/solr/select"});

  It would be better to omit the /select from the base URL.  Consider 
Solr rooted at a particular URL without the request handler mapping 
attached to it, and allow the /select or qt=whatever to be added at 
a different stage.  This way SolrJS can attach to any number of request 
handlers depending on context.


Yep, sounds good. In fact, this URL can be customized freely. It just has to 
be the URL that delivers the JSON/HTML/whatever. Maybe some users use a proxy 
(e.g. Apache) to mount a Solr select under www.mydomain.com/solrrequest or similar.


So we could split it into three parameters for the manager: baseUrl (eg. 
http://localhost:8983/solr/), selectPath (eg. select) and querytype (eg. 
standard). baseUrl should be mandatory, the others should provide the default 
values if not specified.




Ryan commented on using something besides JavaScript to generate HTML 
pieces.  One alternative that I propose is to use a 
VelocityResponseWriter (see SOLR-620 for a first draft) to generate the 
HTML framework which SolrJS can then deal with.  I'm personally fonder 
of the bulk of HTML generated server-side, even snippets that are AJAXed 
in, leaving SolrJS to mainly deal with widgets and getting JSON data to 
them when HTML generation doesn't make sense.  SOLR-620 is still raw, 
and needs some work to make it really easy to leverage, but I wanted to 
toss it out there for folks to improve upon as needed.


The first draft looks good. I'll test it in a template with the standard solr 
response object; maybe it will be handier to use the SolrJ one or another 
wrapper object that provides easier access to the response values.

Concerning HTML creation:

I like your suggestion, and as we are very flexible on the client, I propose 
building two base classes upon AbstractWidget:


- AbstractServerSideWidget: these widgets specify the template file parameter 
and paste the response simply into the target div. Every widget will get an id 
inside the manager (currently it's simply an incremented integer) and pass this 
id as requets parameter to the solr server, so you can create action links in 
the velocity template like:


<a href="javascript:solrjsManager.selectItems(#widgetid#, queryItems)">link</a>


- AbstractClientSideWidget: These work like the existing example, retrieving 
json objects and create html using jquery.


I think it strongly depends on the widgets nature which one suits better.



Also, consider wiring in a SIMILE Timeline (or perhaps even SIMILE 
Exhibit) widget - it'd make for a snazzy example :)  Of course, Google 
Maps would make a great example as well.


Yep, I'll add them to the wiki page. Once the velocity integration and 
implementation of the two mentioned base classes is finished, we can start 
building our widgets. I'll try to get a velocity example widget running in the 
next few days and will post a small message in this list when trunk is updated


regards,
matthias



Erik



On Jul 1, 2008, at 4:00 AM, Matthias Epheser wrote:


Hi community,

as described here 
http://www.nabble.com/Announcement-of-Solr-Javascript-Client-to17462581.html I 
started to work on a javascript widget library for solr.


I've now finished the basic framework work:

- creating a jquery environment
- creating helpers for jquery inheritance
- testing the json communication between client and solr
- creating a base class AbstractWidget

After that, I implemented a SimpleFacet widget and 2 ResultViews to 
show how these things will work in the new jquery environment.


A new wiki page documents this progress:
http://wiki.apache.org/solr/SolrJS

Check out the trunk at:
http://solrstuff.org/svn/solrjs/trunk/


Feedback about the implementation, the quality of code (documentation, 
integration into html, customizing the widgets) as well as ideas for 
future widgets etc. is very welcome.


regards
matthias






Automated Index Creation

2008-07-08 Thread Willie Wong
Hi,

Sorry if this question sounds daft, but I was wondering if there is 
anything built into Solr that allows you to automate the creation of new 
indexes once they reach a certain size or point in time.  I looked briefly 
at the documentation on CollectionDistribution, but it seems more geared 
towards replicating to other production servers... I'm looking for 
something that is more along the lines of archiving indexes for later 
use...


Thanks,

Willie



Re: Pre-processor for stored fields

2008-07-08 Thread Norberto Meijome
On Tue, 8 Jul 2008 10:20:15 -0300
Hugo Barauna [EMAIL PROTECTED] wrote:

 Hi,
 
 I already haved aked this, but I didn't get any good answer, so I will try
 again. I need to pre-process a stored field before it is saved. Just like a
 field that is gonna be indexed. I would be good to apply an analyzer to this
 stored field.
 
 My problem is that I have to send to solr html documents and use a HTML
 filter to remove the HTML tags. But that doesn't work for the stored
 representation of that field.
 
 I found some possible https://issues.apache.org/jira/browse/SOLR-314
 solutions https://issues.apache.org/jira/browse/SOLR-269 to my problem,
 but I would like to know if there is something better.
 
 Thanks!
 

Hi Hugo,
I replied to your email on June 30th. The answer seems to be the same. If you 
have other specific questions, shoot.

B

_
{Beto|Norberto|Numard} Meijome

Anyone who isn't confused here doesn't really understand what's going on.

I speak for myself, not my employer. Contents may be hot. Slippery when wet. 
Reading disclaimers makes you go blind. Writing them is worse. You have been 
Warned.


Re: problems with SpellCheckComponent

2008-07-08 Thread Geoffrey Young



Shalin Shekhar Mangar wrote:

Hi Geoff,

I can't find anything in the code which would give this exception when both
q and spellcheck.q is specified. Though, this exception is certainly
possible when you restart solr. Anyways, I'll look into it more deeply.


great, thanks.



There are a few ways in which we can improve this component. For example a
lot of this trouble can go away if we can reload the spell index on startup
if it exists or build it if it does not exist (SOLR-593 would need to be
resolved for this). With SOLR-605 committed, we can now add an option to
re-build the index (built from Solr fields) on commits by adding a listener
using the API. There are a few issues with collation which are being handled
in SOLR-606.

I'll open new issues to track these items. Please bear with us since this is
a new component and may take a few iterations to stabilize. Thank you for
helping us find these issues :)


np - this is a great feature to have and it's going to save me some 
effort as we prepare for deployment, so it's worth taking the time to 
work out the bugs.


thanks for your effort.

--Geoff


Re: Automated Index Creation

2008-07-08 Thread Ryan McKinley
There is nothing to automatically create a new index, but check the multicore
stuff to see how you could implement this:

http://wiki.apache.org/solr/MultiCore
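At the moment multiple cores are configured through a multicore.xml file in
the Solr home; very roughly it looks like the sketch below, with one index per
core. Treat this only as an illustration, since the exact element names on
trunk may differ; check the wiki page above for the current syntax:

<multicore>
  <core name="core0" instanceDir="core0" />
  <core name="core1" instanceDir="core1" />
</multicore>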


On Jul 8, 2008, at 10:25 AM, Willie Wong wrote:

Hi,

Sorry if this question sounds daft but I was wondering if there  
was
anything built into Solr that allows you to automate the creation of  
new
indexes once they reach a certain size or point in time.  I looked  
briefly
at the documentation on CollectionDestribution, but it seems more  
geared
to towards replicatting to other production servers...I'm  
looking for

something that is more along the lines of archiving indexes for later
use...


Thanks,

Willie





Re: Automated Index Creation

2008-07-08 Thread Shalin Shekhar Mangar
Hi Willie,

If you want to have backups (point-in-time snapshots) then you'd need
something similar to the snapshooter script used in replication. I believe
it creates hard links to files of the current index in a new directory
marked with the timestamp. You can either use snapshooter itself or create
your own script by modifying snapshooter to create copies instead of
hardlinks if you want. You can use the RunExecutableListener to run your
script on every commit or optimize and use the snapshots for backup
purposes.
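For reference, the example solrconfig.xml hooks snapshooter in roughly like
this (a sketch; the exe and dir values are only the example defaults and need
to point at your own script and working directory):

<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">solr/bin/snapshooter</str>
  <str name="dir">.</str>
  <bool name="wait">true</bool>
</listener>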

On Tue, Jul 8, 2008 at 7:55 PM, Willie Wong [EMAIL PROTECTED]
wrote:

 Hi,

 Sorry if this question sounds daft but I was wondering if there was
 anything built into Solr that allows you to automate the creation of new
 indexes once they reach a certain size or point in time.  I looked briefly
 at the documentation on CollectionDestribution, but it seems more geared
 to towards replicatting to other production servers...I'm looking for
 something that is more along the lines of archiving indexes for later
 use...


 Thanks,

 Willie




-- 
Regards,
Shalin Shekhar Mangar.


Re: Pre-processor for stored fields

2008-07-08 Thread Ryan McKinley
If all you are doing is stripping text from HTML, the best option is  
probably to just do that on the client *before* you send it to solr.


If you need to do something more complex -- or that needs to rely on  
other solr configurations you can consider using an  
UpdateRequestProcessor.  Likely you would override the processAdd  
function and augment/modify the document coming in.


An example of this is in the locallucene project, check:
https://locallucene.svn.sourceforge.net/svnroot/locallucene/trunk/localsolr/src/com/pjaol/search/solr/update/LocalUpdateProcessorFactory.java
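Registering such a processor in solrconfig.xml looks roughly like the sketch
below. The chain element and the Log/Run factories are standard, but
MyHtmlStripProcessorFactory is only a placeholder for your own factory class,
and the way the chain is referenced from your update handler should be checked
against the version you are running:

<updateRequestProcessorChain name="htmlstrip">
  <!-- your factory: override processAdd() and modify the incoming document -->
  <processor class="com.example.MyHtmlStripProcessorFactory" />
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>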

ryan



On Jul 8, 2008, at 9:20 AM, Hugo Barauna wrote:

Hi,

I already haved aked this, but I didn't get any good answer, so I  
will try
again. I need to pre-process a stored field before it is saved. Just  
like a
field that is gonna be indexed. I would be good to apply an analyzer  
to this

stored field.

My problem is that I have to send to solr html documents and use a  
HTML

filter to remove the HTML tags. But that doesn't work for the stored
representation of that field.

I found some possible https://issues.apache.org/jira/browse/SOLR-314
solutions https://issues.apache.org/jira/browse/SOLR-269 to my  
problem,

but I would like to know if there is something better.

Thanks!

--
Hugo Pessoa de Baraúna

Se vc faz tudo igual a todo mundo, não pode esperar resultados  
diferentes.


http://hugobarauna.blogspot.com/




Re: Automated Index Creation

2008-07-08 Thread Ryan McKinley

re-reading your post...

Shalin is correct, just use the snapshooter script to create a point- 
in-time snapshot of the index.  The multicore stuff will not help with  
this.


ryan


On Jul 8, 2008, at 11:09 AM, Shalin Shekhar Mangar wrote:

Hi Willie,

If you want to have backups (point-in-time snapshots) then you'd need
something similar to the snapshooter script used in replication. I  
believe

it creates hard links to files of the current index in a new directory
marked with the timestamp. You can either use snapshooter itself or  
create

your own script by modifying snapshooter to create copies instead of
hardlinks if you want. You can use the RunExecutableListener to run  
your

script on every commit or optimize and use the snapshots for backup
purposes.

On Tue, Jul 8, 2008 at 7:55 PM, Willie Wong [EMAIL PROTECTED] 


wrote:


Hi,

Sorry if this question sounds daft but I was wondering if there  
was
anything built into Solr that allows you to automate the creation  
of new
indexes once they reach a certain size or point in time.  I looked  
briefly
at the documentation on CollectionDestribution, but it seems more  
geared
to towards replicatting to other production servers...I'm  
looking for

something that is more along the lines of archiving indexes for later
use...


Thanks,

Willie





--
Regards,
Shalin Shekhar Mangar.




Re: problems with SpellCheckComponent

2008-07-08 Thread Shalin Shekhar Mangar
The spellcheck.q parameter is optional. However, the q parameter is
compulsory, so you can write q=macrosoft and avoid spellcheck.q altogether.
The difference is that spellcheck.q, if present, is analyzed with the query
analyzer of the Solr field used to build the spell check index, whereas the
q parameter is analyzed with a WhitespaceAnalyzer.

Also note that you'll need to specify spellcheck.build=true only on the
first request when it will build the spell check index. The subsequent
requests need not have spellcheck.build=true.
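In other words, the first request could look like this (host and handler as
used elsewhere in this thread):

http://192.168.92.5:8080/solr/spellCheckCompRH?q=macrosoft&spellcheck=true&spellcheck.build=true

and subsequent requests simply drop spellcheck.build=true:

http://192.168.92.5:8080/solr/spellCheckCompRH?q=macrosoft&spellcheck=true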

On Tue, Jul 8, 2008 at 9:03 PM, [EMAIL PROTECTED] wrote:

 Hi,

 Thanks for your help.

 I can't understand the part where Geoff says that "I've never seen it happen
 with just the q or just the spellcheck.q fields in my query".

 That means that I can do, for example:
 http://192.168.92.5:8080/solr/spellCheckCompRH?spellcheck.q=macrosoft&spellcheck=true&spellcheck.build=true

 I usually do things like:

 http://192.168.92.5:8080/solr/spellCheckCompRH?q=a&spellcheck.q=macrosoft&spellcheck=true&spellcheck.build=true

 I don't know if I am understanding correctly but if I try the first thing I
 have this exception:

 HTTP Status 500 - null

 java.lang.NullPointerException
     at org.apache.solr.common.util.StrUtils.splitSmart(StrUtils.java:36)
     at org.apache.solr.search.OldLuceneQParser.parse(LuceneQParserPlugin.java:104)
     at org.apache.solr.search.QParser.getQuery(QParser.java:87)
     at org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:82)
     at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:135)
     at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:128)
     at org.apache.solr.core.SolrCore.execute(SolrCore.java:1025)
     at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:272)
     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
     at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
     at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
     at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
     at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
     at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
     at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
     at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
     at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:852)
     at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:584)
     at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1508)
     at java.lang.Thread.run(Unknown Source)


 I hope this helps you.
 If you need any kind of test, don't hesitate to tell me.

 Rober.

 -Mensaje original-
 De: Geoffrey Young [mailto:[EMAIL PROTECTED]
 Enviado el: martes, 08 de julio de 2008 16:58
 Para: solr-user@lucene.apache.org
 Asunto: Re: problems with SpellCheckComponent



 Shalin Shekhar Mangar wrote:
  Hi Geoff,
 
  I can't find anything in the code which would give this exception when
 both
  q and spellcheck.q is specified. Though, this exception is certainly
  possible when you restart solr. Anyways, I'll look into it more deeply.

 great, thanks.

 
  There are a few ways in which we can improve this component. For example
 a
  lot of this trouble can go away if we can reload the spell index on
 startup
  if it exists or build it if it does not exist (SOLR-593 would need to be
  resolved for this). With SOLR-605 committed, we can now add an option to
  re-build the index (built from Solr fields) on commits by adding a
 listener
  using the API. There are a few issues with collation which are being
 handled
  in SOLR-606.
 
  I'll open new issues to track these items. Please bear with us since this
 is
  a new component and may take a few iterations to stabilize. Thank you for
  helping us find these issues :)

 np - this is a great feature to have and it's going to save me some
 effort as we prepare for deployment, so it's worth taking the time to
 work out the bugs.

 thanks for your effort.

 --Geoff




-- 
Regards,
Shalin Shekhar Mangar.


RE: Auto complete

2008-07-08 Thread sundar shankar
Hi Daniel,
 Thanks for the code. I just noticed that you are using
EdgeNGramFilterFactory. I didn't find it in the Solr 1.2 release. Which version
are you using for this? 1.3 isn't out yet, right? Is there any other production
version of Solr available that I can use?
 
Regards
Sundar




Re: Automated Index Creation

2008-07-08 Thread Ryan McKinley


<delete><query>*:*</query></delete>

will wipe all data in the index...
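Note that, as with any update, the deleted documents only disappear from
search results after a commit (for example by also posting <commit/> to the
update handler).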


On Jul 8, 2008, at 12:05 PM, Willie Wong wrote:

Thanks Shalin and Ryan for your posts...

I think the snapshooter will work fine for creating the indexes, and
then I can use the multicore capabilities to make them available to
users. One final question though: after the snapshot has been created,
is there a way to totally clear out the contents of the master index,
or have Solr recreate the data directory?



Thanks,

Willie











Re: Auto complete

2008-07-08 Thread Shalin Shekhar Mangar
He must be using a nightly build of Solr 1.3 -- I think you can consider
using it as it is quite stable and close to release.

On Tue, Jul 8, 2008 at 10:38 PM, sundar shankar [EMAIL PROTECTED]
wrote:

 Hi Daniel,
 Thanks for the code. I just did observe that you have
 EdgeNGramFilterFactory. I didnt find it in the 1.2 Solr version. Which
 version are you using for this. 1.3 isnt out yet rite. Is there any other
 production version of Solr available that I can use?

 Regards
 Sundar







-- 

Re: solr synonyms behaviour

2008-07-08 Thread swarag


hossman wrote:
 
 This is Issue #1 regarding trying to use query time multi word synonyms 
 discussed on the wiki...
 
 The Lucene QueryParser tokenizes on white space before giving any 
 text to the Analyzer, so if a person searches for the words sea biscit 
 the analyzer will be given the words sea and biscit seperately, and 
 will not know that they match a synonym.
 
 on the boosting part of the query (where the dismax handler 
 automagically quote the entire input and queries it against the pf 
 fields, the synonyms do get used (because the whole input is analyzed as 
 one string) but in this case the phrase queries will match any of these 
 phrases...
 
divorce dispute resolution
alternative mediation resolution
divorce mediation resolution
etc...
 
 ..it will *NOT* match either of these phrases...
 
divorce mediation
alternative dispute resolution
 
 ...because the SynonymFilter has no way to tell the query parser which 
 words should be linked to which other words when building up the phrase 
 query.  
 
 This is Issue #2 regarding trying to use query time multi word synonyms
 discussed on the wiki...
 
 Phrase searching (ie: sea biscit) will cause the QueryParser to pass 
 the entire string to the analyzer, but if the SynonymFilter is 
 configured to expand the synonyms, then when the QueryParser gets the  
 resulting list of tokens back from the Analyzer, it will construct a  
 MultiPhraseQuery that will not have the desired effect. This is because  
 of the limited mechanism available for the Analyzer to indicate that 
 two terms occupy the same position: there is no way to indicate that a  
 phrase occupies the same position as a term. For our example the  
 resulting MultiPhraseQuery would be (sea | sea | seabiscuit) (biscuit 
 | biscit) which would not match the simple case of seabisuit 
 occuring in a document
 
 : I have the synonym filter only at query time coz i can't re-index data
 (or
 : portion of data) everytime i add a synonym and a couple of other
 reasons.
 
 Use cases like yours will *never* work as a query time synonym ... hence 
 all of the information about multi-word synonyms and the caveats about 
 using them in the wiki...
 
 http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#SynonymFilter
 
 
 -Hoss
 
 
 

We have a very similar problem, and want to make sure that this is hopeless
with Solr before we try something else...

I have a synonyms.txt file similar to the following:
bar=bar, club
club=club, bar, night club
...

A search for 'bar' returns the exact results we want: anything with 'bar' or
'club' in the name.  However, a search for 'club' produces very strange
results: name:(club bar night) club

Knowing that Lucene struggles with multi-word query-time synonyms, my
question is: does this also affect index-time synonyms? What other
alternatives do we have if we require multi-word synonyms?
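For comparison, this is roughly what an index-time setup would look like. The
field type name and tokenizer below are only illustrative; the point is that
the SynonymFilter then runs over the whole field value at index time, so it is
not subject to the query parser's whitespace splitting described above:

<fieldType name="text_syn" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>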




Re: Integrate Solr with Tomcat in Linux

2008-07-08 Thread sandeep kaur

Hi,

When I run Tomcat after copying the Solr files to the appropriate Tomcat
directories, I get the following error in the Catalina log:

Jul 8, 2008 10:30:02 PM org.apache.catalina.core.AprLifecycleListener init
INFO: The Apache Tomcat Native library which allows optimal performance in 
production environments was not found on the java.library.path: 
/usr/java/jdk1.6.0_06
/jre/lib/i386/client:/usr/java/jdk1.6.0_06/jre/lib/i386:/usr/java/jdk1.6.0_06/jr
e/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
Jul 8, 2008 10:30:02 PM org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
Jul 8, 2008 10:30:02 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 285 ms
Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.9
Jul 8, 2008 10:30:02 PM org.apache.solr.servlet.SolrDispatchFilter init
INFO: SolrDispatchFilter.init()
Jul 8, 2008 10:30:02 PM org.apache.solr.core.Config getInstanceDir
INFO: Using JNDI solr.home: /home/user_name/softwares
Jul 8, 2008 10:30:02 PM org.apache.solr.core.Config setInstanceDir
INFO: Solr home set to '/home/user_name/softwares/'
Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardContext start
SEVERE: Error filterStart
Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardContext start
SEVERE: Context [/solr] startup failed due to previous errors
Jul 8, 2008 10:30:03 PM org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
Jul 8, 2008 10:30:03 PM org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
Jul 8, 2008 10:30:03 PM org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/30  config=null
Jul 8, 2008 10:30:03 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 589 ms

In the browser, when I go to http://localhost:8080/solr/admin

I get the following error:

HTTP Status 404 - /solr/admin

type Status report

message /solr/admin

description The requested resource (/solr/admin) is not available.
Apache Tomcat/6.0.9

Could anyone please suggest how to resolve this error?

Thanks,
Sandip


--- On Tue, 8/7/08, Shalin Shekhar Mangar [EMAIL PROTECTED] wrote:

 From: Shalin Shekhar Mangar [EMAIL PROTECTED]
 Subject: Re: Integrate Solr with Tomcat in Linux
 To: solr-user@lucene.apache.org, [EMAIL PROTECTED]
 Date: Tuesday, 8 July, 2008, 4:40 PM
 Take a look at http://wiki.apache.org/solr/SolrTomcat
 
 Please avoid replying to an older message when you're
 starting a new topic.
 
 On Tue, Jul 8, 2008 at 4:36 PM, sandeep kaur
 [EMAIL PROTECTED]
 wrote:
 
  Hi,
 
   I have solr with jetty as server application running
 on Linux.
 
  Could anyone please tell me the changes i need to make
 to integrate Tomcat
  with solr on Linux.
 
  Thanks,
  Sandip
 
  --- On Mon, 7/7/08, Benson Margulies
 [EMAIL PROTECTED] wrote:
 
   From: Benson Margulies
 [EMAIL PROTECTED]
   Subject: Re: js client
   To: [EMAIL PROTECTED], solr-user
 solr-user@lucene.apache.org
   Date: Monday, 7 July, 2008, 11:43 PM
   The Javascript should have the right URL
 automatically if
   you get it from
   the ?js URL.
  
   Anyway, I think I was the first person to say
   'stupid' about that WSDL in
   the sample.
  
   I'm not at all clear on what you are doing at
 this
   point.
  
   Please send along  the URL that works for you in
 soapUI and
   the URL that
   works for you in the
 script.../script
   element.
  
  
  
  
   On Mon, Jul 7, 2008 at 5:54 AM, Christine Karman
   [EMAIL PROTECTED]
   wrote:
  
On Sun, 2008-07-06 at 10:25 -0400, Benson
 Margulies
   wrote:
 In the sample, it is a relative URL to
 the web
   service endpoint. The
 sample starts from a stupid WSDL with
 silly names
   for the service and
 the port.
   
I'm sorry about using the word
 stupid.
   

 Take your endpoint deployment URL, the
 very URL
   that is logged when
 your service starts up, and add ?js to
 the end of
   it. Period.
   
Yes, that's what I do, and that part has
 been
   working all the time. What
doesn't work is I use the same url
 without the ?js
   for the web service.
Is there a way to see the Jetty log file?
 Mabye that
   will give me a clue
what's happening. If nothing is in the
 jetty log
   file, I know the
problem is elsewhere.
   
Christine
   

 If it is

 http://wendy.christine.nl:9000/soap,
 make it

  http://wendy.christine.nl:9000/soap?js



 The sample is taking advantage of
 relative URLs
   to avoid typing
 http://etc.

 On Sun, Jul 6, 2008 at 8:52 AM,
 Christine Karman
 [EMAIL PROTECTED] wrote:
 On Sun, 2008-07-06 at 07:37
 -0400, Benson
   Margulies wrote:
  The javascript client
 probably
   cannot handle redirects. If
 

Re: problems with SpellCheckComponent

2008-07-08 Thread Norberto Meijome
On Tue, 8 Jul 2008 21:10:51 +0530
Shalin Shekhar Mangar [EMAIL PROTECTED] wrote:

 Also note that you'll need to specify spellcheck.build=true only on the
 first request when it will build the spell check index. The subsequent
 requests need not have spellcheck.build=true.

As a matter of fact, you won't want to have spellcheck.build=true
in every request, as it will impact your server's performance ... the impact may
be minimal if SOLR can compare timestamps of the spellcheck index and the main
index and avoid rebuilding, but I don't know if that is how it is implemented.

You really only want to rebuild after a commit.
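
For example, with the /spellCheckCompRH request handler from the example
solrconfig.xml (the handler name and query below are only illustrative), the
first request after indexing might be:

http://localhost:8983/solr/spellCheckCompRH?q=documemt&spellcheck=true&spellcheck.build=true

and subsequent requests would simply drop spellcheck.build. If you do want a
rebuild on every commit, the spellchecker configuration also has a
buildOnCommit option (a <str name="buildOnCommit">true</str> entry in
solrconfig.xml); check the SpellCheckComponent wiki page for whether the build
you are running supports it.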

B

_
{Beto|Norberto|Numard} Meijome

Any society that would give up a little liberty to gain a little security will
deserve neither and lose both. Benjamin Franklin

I speak for myself, not my employer. Contents may be hot. Slippery when wet.
Reading disclaimers makes you go blind. Writing them is worse. You have been
Warned.


Re: Automated Index Creation

2008-07-08 Thread Norberto Meijome
On Tue, 8 Jul 2008 12:05:45 -0400
Willie Wong [EMAIL PROTECTED] wrote:

 I think the snapshooter will work fine for creating the indexes and then I 
 can use the multicore capabilities to make them available to users one 
 final question though, after snapshot has been created is there a way to 
 totally clear out the contents in the master index - or have solr recreate 
 the data directory?

Yup. But you would have to have predefined cores to publish the data
to... it would be interesting (useful?) if cores could be dynamically created
and added/registered to a running instance of SOLR. Is that supported already?
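
On the other part of the question, clearing out the master index once the
snapshot has been taken: one straightforward way, sketched here untested, is a
delete-by-query for all documents followed by a commit, posted to the /update
handler:

<delete><query>*:*</query></delete>
<commit/>

This removes the documents rather than recreating the data directory, but the
effect for subsequent indexing runs is much the same.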

B
_
{Beto|Norberto|Numard} Meijome

Ask not what's inside your head, but what your head's inside of.
   J. J. Gibson

I speak for myself, not my employer. Contents may be hot. Slippery when wet.
Reading disclaimers makes you go blind. Writing them is worse. You have been
Warned.


Re: Automated Index Creation

2008-07-08 Thread Shalin Shekhar Mangar
Yes, SOLR-350 added that capability. Look at
http://wiki.apache.org/solr/MultiCore for details.
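
As a sketch, with the core admin handler enabled via the adminPath setting
described on that wiki page, a new core can be created against a running
instance with a request along these lines (the core name and paths are
illustrative):

http://localhost:8983/solr/admin/cores?action=CREATE&name=core1&instanceDir=/path/to/core1&config=solrconfig.xml&schema=schema.xml&dataDir=data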

On Wed, Jul 9, 2008 at 6:26 AM, Norberto Meijome [EMAIL PROTECTED]
wrote:

 On Tue, 8 Jul 2008 12:05:45 -0400
 Willie Wong [EMAIL PROTECTED] wrote:

  I think the snapshooter will work fine for creating the indexes and then
 I
  can use the multicore capabilities to make them available to users
 one
  final question though, after snapshot has been created is there a way to
  totally clear out the contents in the master index - or have solr
 recreate
  the data directory?

 yup. But you would have to have predefined cores where to publish the data
 to... would be interesting ( useful ?? ) if cores could be dynamically
 created
 and added/registered to a running instance of SOLR. Is that supported
 already?

 B
 _
 {Beto|Norberto|Numard} Meijome

 Ask not what's inside your head, but what your head's inside of.
   J. J. Gibson

 I speak for myself, not my employer. Contents may be hot. Slippery when
 wet.
 Reading disclaimers makes you go blind. Writing them is worse. You have
 been
 Warned.




-- 
Regards,
Shalin Shekhar Mangar.


Re: Integrate Solr with Tomcat in Linux

2008-07-08 Thread Noble Paul നോബിള്‍ नोब्ळ्
The context 'solr' is not initialized. The most likely reason is that
you have not set solr.home correctly.
--Noble
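
With Tomcat, one way to set it is a JNDI entry in a context fragment such as
$CATALINA_HOME/conf/Catalina/localhost/solr.xml; the paths below are
illustrative, and the value must point at a directory containing conf/ with
solrconfig.xml and schema.xml:

<Context docBase="/opt/solr/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String" value="/opt/solr/home" override="true"/>
</Context>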

On Wed, Jul 9, 2008 at 3:24 AM, sandeep kaur [EMAIL PROTECTED] wrote:

 Hi,

 As i am running tomcat after copying the solr files to appropriate tomcat 
 directories, i am getting the followin error in the catalina log:

 Jul 8, 2008 10:30:02 PM org.apache.catalina.core.AprLifecycleListener init
 INFO: The Apache Tomcat Native library which allows optimal performance in 
 production environments was not found on the java.library.path: 
 /usr/java/jdk1.6.0_06
 /jre/lib/i386/client:/usr/java/jdk1.6.0_06/jre/lib/i386:/usr/java/jdk1.6.0_06/jr
 e/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
 Jul 8, 2008 10:30:02 PM org.apache.coyote.http11.Http11Protocol init
 INFO: Initializing Coyote HTTP/1.1 on http-8080
 Jul 8, 2008 10:30:02 PM org.apache.catalina.startup.Catalina load
 INFO: Initialization processed in 285 ms
 Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardService start
 INFO: Starting service Catalina
 Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardEngine start
 INFO: Starting Servlet Engine: Apache Tomcat/6.0.9
 Jul 8, 2008 10:30:02 PM org.apache.solr.servlet.SolrDispatchFilter init
 INFO: SolrDispatchFilter.init()
 Jul 8, 2008 10:30:02 PM org.apache.solr.core.Config getInstanceDir
 INFO: Using JNDI solr.home: /home/user_name/softwares
 Jul 8, 2008 10:30:02 PM org.apache.solr.core.Config setInstanceDir
 INFO: Solr home set to '/home/user_name/softwares/'
 Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardContext start
 SEVERE: Error filterStart
 Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardContext start
 SEVERE: Context [/solr] startup failed due to previous errors
 Jul 8, 2008 10:30:03 PM org.apache.coyote.http11.Http11Protocol start
 INFO: Starting Coyote HTTP/1.1 on http-8080
 Jul 8, 2008 10:30:03 PM org.apache.jk.common.ChannelSocket init
 INFO: JK: ajp13 listening on /0.0.0.0:8009
 Jul 8, 2008 10:30:03 PM org.apache.jk.server.JkMain start
 INFO: Jk running ID=0 time=0/30  config=null
 Jul 8, 2008 10:30:03 PM org.apache.catalina.startup.Catalina start
 INFO: Server startup in 589 ms

 In the browser while typing http://localhost:8080/solr/admin

 i am getting the following error

 HTTP Status 404 - /solr/admin

 type Status report

 message /solr/admin

 description The requested resource (/solr/admin) is not available.
 Apache Tomcat/6.0.9

 Could anyone please suggest how to resolve this error.

 Thanks,
 Sandip


 --- On Tue, 8/7/08, Shalin Shekhar Mangar [EMAIL PROTECTED] wrote:

 From: Shalin Shekhar Mangar [EMAIL PROTECTED]
 Subject: Re: Integrate Solr with Tomcat in Linux
 To: solr-user@lucene.apache.org, [EMAIL PROTECTED]
 Date: Tuesday, 8 July, 2008, 4:40 PM
 Take a look at http://wiki.apache.org/solr/SolrTomcat

 Please avoid replying to an older message when you're
 starting a new topic.

 On Tue, Jul 8, 2008 at 4:36 PM, sandeep kaur
 [EMAIL PROTECTED]
 wrote:

  Hi,
 
   I have solr with jetty as server application running
 on Linux.
 
  Could anyone please tell me the changes i need to make
 to integrate Tomcat
  with solr on Linux.
 
  Thanks,
  Sandip
 
  --- On Mon, 7/7/08, Benson Margulies
 [EMAIL PROTECTED] wrote:
 
   From: Benson Margulies
 [EMAIL PROTECTED]
   Subject: Re: js client
   To: [EMAIL PROTECTED], solr-user
 solr-user@lucene.apache.org
   Date: Monday, 7 July, 2008, 11:43 PM
   The Javascript should have the right URL
 automatically if
   you get it from
   the ?js URL.
  
   Anyway, I think I was the first person to say
   'stupid' about that WSDL in
   the sample.
  
   I'm not at all clear on what you are doing at
 this
   point.
  
   Please send along  the URL that works for you in
 soapUI and
   the URL that
   works for you in the
 script.../script
   element.
  
  
  
  
   On Mon, Jul 7, 2008 at 5:54 AM, Christine Karman
   [EMAIL PROTECTED]
   wrote:
  
On Sun, 2008-07-06 at 10:25 -0400, Benson
 Margulies
   wrote:
 In the sample, it is a relative URL to
 the web
   service endpoint. The
 sample starts from a stupid WSDL with
 silly names
   for the service and
 the port.
   
I'm sorry about using the word
 stupid.
   

 Take your endpoint deployment URL, the
 very URL
   that is logged when
 your service starts up, and add ?js to
 the end of
   it. Period.
   
Yes, that's what I do, and that part has
 been
   working all the time. What
doesn't work is I use the same url
 without the ?js
   for the web service.
Is there a way to see the Jetty log file?
 Mabye that
   will give me a clue
what's happening. If nothing is in the
 jetty log
   file, I know the
problem is elsewhere.
   
Christine
   

 If it is

 http://wendy.christine.nl:9000/soap,
 make it

  http://wendy.christine.nl:9000/soap?js



 The sample is taking advantage of
 relative URLs
   to avoid typing
 http://etc.

 On Sun, 

Re: Integrate Solr with Tomcat in Linux

2008-07-08 Thread Preetam Rao
Set the Solr home folder as follows:

If you are using a JNDI name (solr/home) or a command-line system property
(solr.solr.home), then Solr will look for the conf and lib folders under that
folder.

If you are not using a JNDI name, then it looks for solr/conf and solr/lib
folders under the current directory, which is the directory you started Tomcat
from.

You can also get the conf and lib folders from the distribution's example
folder.
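
If you prefer the system property route, something along these lines before
starting Tomcat should work (the path is illustrative and must point at a
directory containing conf/):

export JAVA_OPTS="$JAVA_OPTS -Dsolr.solr.home=/opt/solr/home"
$CATALINA_HOME/bin/catalina.sh start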

Hope this helps

Thanks
Preetam

On Wed, Jul 9, 2008 at 9:28 AM, Noble Paul നോബിള്‍ नोब्ळ् 
[EMAIL PROTECTED] wrote:

 The context 'solr' is not  initialized. The most likely reson is that
 you have not set the solr.home correctly.
 --Noble

 On Wed, Jul 9, 2008 at 3:24 AM, sandeep kaur [EMAIL PROTECTED]
 wrote:
 
  Hi,
 
  As i am running tomcat after copying the solr files to appropriate tomcat
 directories, i am getting the followin error in the catalina log:
 
  Jul 8, 2008 10:30:02 PM org.apache.catalina.core.AprLifecycleListener
 init
  INFO: The Apache Tomcat Native library which allows optimal performance
 in production environments was not found on the java.library.path:
 /usr/java/jdk1.6.0_06
 
 /jre/lib/i386/client:/usr/java/jdk1.6.0_06/jre/lib/i386:/usr/java/jdk1.6.0_06/jr
  e/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
  Jul 8, 2008 10:30:02 PM org.apache.coyote.http11.Http11Protocol init
  INFO: Initializing Coyote HTTP/1.1 on http-8080
  Jul 8, 2008 10:30:02 PM org.apache.catalina.startup.Catalina load
  INFO: Initialization processed in 285 ms
  Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardService start
  INFO: Starting service Catalina
  Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardEngine start
  INFO: Starting Servlet Engine: Apache Tomcat/6.0.9
  Jul 8, 2008 10:30:02 PM org.apache.solr.servlet.SolrDispatchFilter init
  INFO: SolrDispatchFilter.init()
  Jul 8, 2008 10:30:02 PM org.apache.solr.core.Config getInstanceDir
  INFO: Using JNDI solr.home: /home/user_name/softwares
  Jul 8, 2008 10:30:02 PM org.apache.solr.core.Config setInstanceDir
  INFO: Solr home set to '/home/user_name/softwares/'
  Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardContext start
  SEVERE: Error filterStart
  Jul 8, 2008 10:30:02 PM org.apache.catalina.core.StandardContext start
  SEVERE: Context [/solr] startup failed due to previous errors
  Jul 8, 2008 10:30:03 PM org.apache.coyote.http11.Http11Protocol start
  INFO: Starting Coyote HTTP/1.1 on http-8080
  Jul 8, 2008 10:30:03 PM org.apache.jk.common.ChannelSocket init
  INFO: JK: ajp13 listening on /0.0.0.0:8009
  Jul 8, 2008 10:30:03 PM org.apache.jk.server.JkMain start
  INFO: Jk running ID=0 time=0/30  config=null
  Jul 8, 2008 10:30:03 PM org.apache.catalina.startup.Catalina start
  INFO: Server startup in 589 ms
 
  In the browser while typing http://localhost:8080/solr/admin
 
  i am getting the following error
 
  HTTP Status 404 - /solr/admin
 
  type Status report
 
  message /solr/admin
 
  description The requested resource (/solr/admin) is not available.
  Apache Tomcat/6.0.9
 
  Could anyone please suggest how to resolve this error.
 
  Thanks,
  Sandip
 
 
  --- On Tue, 8/7/08, Shalin Shekhar Mangar [EMAIL PROTECTED]
 wrote:
 
  From: Shalin Shekhar Mangar [EMAIL PROTECTED]
  Subject: Re: Integrate Solr with Tomcat in Linux
  To: solr-user@lucene.apache.org, [EMAIL PROTECTED]
  Date: Tuesday, 8 July, 2008, 4:40 PM
  Take a look at http://wiki.apache.org/solr/SolrTomcat
 
  Please avoid replying to an older message when you're
  starting a new topic.
 
  On Tue, Jul 8, 2008 at 4:36 PM, sandeep kaur
  [EMAIL PROTECTED]
  wrote:
 
   Hi,
  
I have solr with jetty as server application running
  on Linux.
  
   Could anyone please tell me the changes i need to make
  to integrate Tomcat
   with solr on Linux.
  
   Thanks,
   Sandip
  
   --- On Mon, 7/7/08, Benson Margulies
  [EMAIL PROTECTED] wrote:
  
From: Benson Margulies
  [EMAIL PROTECTED]
Subject: Re: js client
To: [EMAIL PROTECTED], solr-user
  solr-user@lucene.apache.org
Date: Monday, 7 July, 2008, 11:43 PM
The Javascript should have the right URL
  automatically if
you get it from
the ?js URL.
   
Anyway, I think I was the first person to say
'stupid' about that WSDL in
the sample.
   
I'm not at all clear on what you are doing at
  this
point.
   
Please send along  the URL that works for you in
  soapUI and
the URL that
works for you in the
  script.../script
element.
   
   
   
   
On Mon, Jul 7, 2008 at 5:54 AM, Christine Karman
[EMAIL PROTECTED]
wrote:
   
 On Sun, 2008-07-06 at 10:25 -0400, Benson
  Margulies
wrote:
  In the sample, it is a relative URL to
  the web
service endpoint. The
  sample starts from a stupid WSDL with
  silly names
for the service and
  the port.

 I'm sorry about using the word
  stupid.

 
  Take your endpoint deployment URL, the
  very URL
that is logged when
  your