Build failed in Hudson: Solr-Nightly #322

2008-01-17 Thread hudson
See http://lucene.zones.apache.org:8080/hudson/job/Solr-Nightly/322/changes

--
started
ERROR: svn: PROPFIND request failed on '/repos/asf/lucene/solr/trunk'
svn: Connection timed out
org.tmatesoft.svn.core.SVNException: svn: PROPFIND request failed on 
'/repos/asf/lucene/solr/trunk'
svn: Connection timed out
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:49)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVUtil.findStartingProperties(DAVUtil.java:124)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.fetchRepositoryUUID(DAVConnection.java:88)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.testConnection(DAVRepository.java:85)
at 
hudson.scm.SubversionSCM$DescriptorImpl.checkRepositoryPath(SubversionSCM.java:1134)
at 
hudson.scm.SubversionSCM.repositoryLocationsExist(SubversionSCM.java:1195)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:335)
at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:292)
at hudson.model.AbstractProject.checkout(AbstractProject.java:541)
at 
hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:223)
at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:189)
at hudson.model.Run.run(Run.java:649)
at hudson.model.Build.run(Build.java:102)
at hudson.model.ResourceController.execute(ResourceController.java:70)
at hudson.model.Executor.run(Executor.java:64)
Publishing Javadoc
Recording test results



[jira] Created: (SOLR-460) Improvement to highlighting infrastructure

2008-01-17 Thread Sergey Dryganets (JIRA)
Improvement to highlighting infrastructure
-

 Key: SOLR-460
 URL: https://issues.apache.org/jira/browse/SOLR-460
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Affects Versions: 1.3
Reporter: Sergey Dryganets


I'm writing a plugin for Solr to highlight custom user data.

That is, my application sends a set of documents to highlight, in XML format, 
to a handler on Solr.

The handler parses the XML and builds a list of Document objects (not the 
Lucene class, just my own abstraction holding a Map<fieldName, fieldValue>, 
which is enough to highlight the fields properly).

So I think you could change SolrHighlighter from:

public NamedList<Object> doHighlighting(DocList docs, Query query, 
SolrQueryRequest req, String[] defaultFields) 

to:

public NamedList<Object> doHighlighting(List<Document> docs, Query query, 
SolrQueryRequest req, String[] defaultFields) 

or introduce some kind of provider that supplies the list of abstract 
Documents, and the Highlighter will be more reusable :)
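A rough sketch of the provider idea (all names here are hypothetical, not an 
existing Solr API; just to show the shape I have in mind):

import java.util.List;
import java.util.Map;

// An abstract document that only exposes the field values the highlighter
// needs, so the same highlighting code can run over a Lucene DocList or over
// documents posted directly by a client.
public interface HighlightableDocument {
  Map<String, String> getFields();   // field name -> stored text to highlight
}

// A provider the handler (or the standard search path) could implement to
// hand the highlighter its list of documents.
public interface DocumentProvider {
  List<HighlightableDocument> getDocuments();
}

// SolrHighlighter could then accept the provider instead of a DocList:
// public NamedList<Object> doHighlighting(DocumentProvider docs, Query query,
//                                         SolrQueryRequest req, String[] defaultFields)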


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-459) SolrDispatchFilter bug or wrong default parameter

2008-01-17 Thread Sergey Dryganets (JIRA)
SolrDispatchFilter bug or wrong default parameter
-

 Key: SOLR-459
 URL: https://issues.apache.org/jira/browse/SOLR-459
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
 Environment: latest solr version from repository
Reporter: Sergey Dryganets


String path = req.getServletPath();

Suppose we have a handler named highlight and Solr running at 
http://localhost:8080/solr

We request the URL
http://localhost:8080/solr/highlight

so path == /highlight

if( pathPrefix != null && path.startsWith( pathPrefix ) ) {
  path = path.substring( pathPrefix.length() );
}

The default pathPrefix == null, so path is still /highlight.

int idx = path.indexOf( ':' );
if( idx > 0 ) {
  // save the portion after the ':' for a 'handler' path parameter
  path = path.substring( 0, idx );
}

This does not change path either.

So we try to look up the handler by the name /highlight, but the real handler 
name is highlight.

(There is normalization inside the getRequestHandler method, but it only 
removes a slash at the end of the path.)

handler = core.getRequestHandler( path );

After changing the default value of pathPrefix to / (in web.xml), everything 
works fine.



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: solrj patch to COMMIT with xml

2008-01-17 Thread Ryan McKinley

Did adding a request handler to /update fix your issue?

I added some docs to:
http://wiki.apache.org/solr/Solrj

And a warning message to the deprecated SolrUpdateServlet:
http://svn.apache.org/viewvc?view=rev&revision=612896

ryan


Keene, David wrote:
Running under tomcat 5.5.23 


Sending the following command (productId is my document primary key):



RE: solrj patch to COMMIT with xml

2008-01-17 Thread Keene, David
Yes, that did it.  I'm sorry about the trouble!

-Original Message-
From: Ryan McKinley [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 17, 2008 10:01 AM
To: solr-dev@lucene.apache.org
Subject: Re: solrj patch to COMMIT with xml

Did adding a request handler to /update fix your issue?

I added some docs to:
http://wiki.apache.org/solr/Solrj

And a warning message to the deprecated SolrUpdateServlet:
http://svn.apache.org/viewvc?view=rev&revision=612896

ryan


Keene, David wrote:
 Running under tomcat 5.5.23 
 
 Sending the following command (productId is my document primary key):
 


[jira] Commented: (SOLR-461) Highlighting TokenStream Truncation capability

2008-01-17 Thread Mike Klaas (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12560053#action_12560053
 ] 

Mike Klaas commented on SOLR-461:
-

Isn't this essentially the same thing as the hl.maxAnalyzedChars parameter?

http://wiki.apache.org/solr/HighlightingParameters
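
For example (the host, port, and field name here are just illustrative), a 
request like

http://localhost:8983/solr/select?q=solr&hl=true&hl.fl=text&hl.maxAnalyzedChars=51200

caps how much of each field the highlighter analyzes, which sounds like the 
same goal.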


 Highlighting TokenStream Truncation capability
 --

 Key: SOLR-461
 URL: https://issues.apache.org/jira/browse/SOLR-461
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor

 It is sometimes the case when generating snippets that one need not 
 fragment/analyze the whole document (especially for large documents) in order 
 to show meaningful snippet highlights. 
 Patch to follow that adds a counting TokenFilter that returns null after X 
 number of Tokens have been seen.  This filter will then be hooked into the 
 SolrHighlighter and configurable via solrconfig.xml.  The default value will 
 be Integer.MAX_VALUE or, I suppose, it could be set to whatever Max Field 
 Length is set to, as well.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-461) Highlighting TokenStream Truncation capability

2008-01-17 Thread Grant Ingersoll (JIRA)
Highlighting TokenStream Truncation capability
--

 Key: SOLR-461
 URL: https://issues.apache.org/jira/browse/SOLR-461
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor


It is sometimes the case when generating snippets that one need not 
fragment/analyze the whole document (especially for large documents) in order 
to show meaningful snippet highlights. 

Patch to follow that adds a counting TokenFilter that returns null after X 
number of Tokens have been seen.  This filter will then be hooked into the 
SolrHighlighter and configurable via solrconfig.xml.  The default value will be 
Integer.MAX_VALUE or, I suppose, it could be set to whatever Max Field Length 
is set to, as well.
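
A bare-bones sketch of the idea (written against the current Token next() 
TokenStream API; the class name is just illustrative, and this is not the 
actual attached patch):

import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

// Passes tokens through until maxTokens have been seen, then ends the stream
// by returning null, so the highlighter never analyzes the tail of a huge field.
public class CountingTokenFilter extends TokenFilter {
  private final int maxTokens;   // default would be Integer.MAX_VALUE
  private int seen = 0;

  public CountingTokenFilter(TokenStream input, int maxTokens) {
    super(input);
    this.maxTokens = maxTokens;
  }

  public Token next() throws IOException {
    if (seen >= maxTokens) {
      return null;               // truncate: no more tokens from here on
    }
    seen++;
    return input.next();
  }
}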

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-461) Highlighting TokenStream Truncation capability

2008-01-17 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll updated SOLR-461:
-

Attachment: SOLR-461.patch

Well, here's the patch, although I didn't test for coordination with the 
maxChars piece.

 Highlighting TokenStream Truncation capability
 --

 Key: SOLR-461
 URL: https://issues.apache.org/jira/browse/SOLR-461
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor
 Attachments: SOLR-461.patch


 It is sometimes the case when generating snippets that one need not 
 fragment/analyze the whole document (especially for large documents) in order 
 to show meaningful snippet highlights. 
 Patch to follow that adds a counting TokenFilter that returns null after X 
 number of Tokens have been seen.  This filter will then be hooked into the 
 SolrHighlighter and configurable via solrconfig.xml.  The default value will 
 be Integer.MAX_VALUE or, I suppose, it could be set to whatever Max Field 
 Length is set to, as well.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Work started: (SOLR-461) Highlighting TokenStream Truncation capability

2008-01-17 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on SOLR-461 started by Grant Ingersoll.

 Highlighting TokenStream Truncation capability
 --

 Key: SOLR-461
 URL: https://issues.apache.org/jira/browse/SOLR-461
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor

 It is sometimes the case when generating snippets that one need not 
 fragment/analyze the whole document (especially for large documents) in order 
 to show meaningful snippet highlights. 
 Patch to follow that adds a counting TokenFilter that returns null after X 
 number of Tokens have been seen.  This filter will then be hooked into the 
 SolrHighlighter and configurable via solrconfig.xml.  The default value will 
 be Integer.MAX_VALUE or, I suppose, it could be set to whatever Max Field 
 Length is set to, as well.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-461) Highlighting TokenStream Truncation capability

2008-01-17 Thread Mike Klaas (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12560069#action_12560069
 ] 

Mike Klaas commented on SOLR-461:
-

The max characters thing is directly from the Lucene contrib highlighter.  It 
is based on token offset, so it counts whitespace, and doesn't cut off a token 
in the middle.

It is also analogous to the RegexFragmenter's maxAnalyzedChars parameter, which 
can't be token-based.

I'm not sure it is wise to add two apis with virtually the same functionality.  
Anyone who wants to set a high limit will have to set both.

However, it might be nice to make the token filter a pluggable component, so 
that users can insert this token filter if they want.

 Highlighting TokenStream Truncation capability
 --

 Key: SOLR-461
 URL: https://issues.apache.org/jira/browse/SOLR-461
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor

 It is sometimes the case when generating snippets that one need not 
 fragment/analyze the whole document (especially for large documents) in order 
 to show meaningful snippet highlights. 
 Patch to follow that adds a counting TokenFilter that returns null after X 
 number of Tokens have been seen.  This filter will then be hooked into the 
 SolrHighlighter and configurable via solrconfig.xml.  The default value will 
 be Integer.MAX_VALUE or, I suppose, it could be set to whatever Max Field 
 Length is set to, as well.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-461) Highlighting TokenStream Truncation capability

2008-01-17 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12560057#action_12560057
 ] 

Grant Ingersoll commented on SOLR-461:
--

I suppose it is similar, but I don't find counting characters all that 
intuitive.  A token-based approach doesn't cut off in the middle of a word, and 
with character counting it isn't clear to me whether whitespace characters are 
counted, etc.  Plus, it is analogous to Lucene's Max Field Length, which is 
token-based as well.

 Highlighting TokenStream Truncation capability
 --

 Key: SOLR-461
 URL: https://issues.apache.org/jira/browse/SOLR-461
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor

 It is sometimes the case when generating snippets that one need not 
 fragment/analyze the whole document (especially for large documents) in order 
 to show meaningful snippet highlights. 
 Patch to follow that adds a counting TokenFilter that returns null after X 
 number of Tokens have been seen.  This filter will then be hooked into the 
 SolrHighlighter and configurable via solrconfig.xml.  The default value will 
 be Integer.MAX_VALUE or, I suppose, it could be set to whatever Max Field 
 Length is set to, as well.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



solr plug-in - external document store

2008-01-17 Thread Ben Incani
Hi Solr Developers,

I would like to implement an external document storage system using
Solr.

Currently I am building lucene indexes with Solr and creating a binary
field, which is used to store a document.  I would like to extend this
implementation by storing the document in an external container system
(independent of Solr) using a simple file system mechanism or a more
robust solution like Jackrabbit.

Presently I am using Solr plug-ins to create additional field types,
which implement the binary storage system, but would like to know what
would be the best method to extend Solr to redirect insertions (add
requests) to an external storage system?

Regards,

Ben


Re: solr plug-in - external document store

2008-01-17 Thread Ryan McKinley
You will either want to do something with a custom RequestHandler, or 
plug into the UpdateRequestProcessor framework

 http://wiki.apache.org/solr/UpdateRequestProcessor
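
For the add side, something roughly like this (a sketch only -- the external 
store and the field names are placeholders, and I haven't compiled it against 
trunk):

import java.io.IOException;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.request.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

// Intercepts each add, hands the raw document body to a (hypothetical)
// external store, strips it from the index, then continues the normal chain.
public class ExternalStoreProcessorFactory extends UpdateRequestProcessorFactory {
  public UpdateRequestProcessor getInstance(SolrQueryRequest req,
      SolrQueryResponse rsp, UpdateRequestProcessor next) {
    return new UpdateRequestProcessor(next) {
      public void processAdd(AddUpdateCommand cmd) throws IOException {
        SolrInputDocument doc = cmd.getSolrInputDocument();
        Object body = doc.getFieldValue("body");           // assumed field name
        if (body != null) {
          // ExternalDocumentStore is a placeholder for whatever container you
          // use (file system, Jackrabbit, ...), keyed by the unique id field.
          ExternalDocumentStore.save((String) doc.getFieldValue("id"), body);
          doc.removeField("body");                         // keep the index lean
        }
        super.processAdd(cmd);                             // pass on to the next processor
      }
    };
  }
}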

On the search/retrieval side, you will probably want to implement a 
SearchComponent to fill in the stored fields:

 http://wiki.apache.org/solr/SearchComponent  (wiki page *way* out of date)

Both of these are 1.3-dev features

ryan


Ben Incani wrote:

Hi Solr Developers,

I would like to implement an external document storage system using
Solr.

Currently I am building lucene indexes with Solr and creating a binary
field, which is used to store a document.  I would like to extend this
implementation by storing the document in an external container system
(independent of Solr) using a simple file system mechanism or a more
robust solution like Jackrabbit.

Presently I am using Solr plug-ins to create additional field types,
which implement the binary storage system, but would like to know what
would be the best method to extend Solr to redirect insertions (add
requests) to an external storage system?

Regards,

Ben





[jira] Updated: (SOLR-418) Editorial Query Boosting Component

2008-01-17 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-418:
--

Attachment: SOLR-418-QueryBoosting.patch

Looks good Ryan!
I reviewed, and changed a few minor things (new patch attached)
- fixed a concurrency bug (accessing the map outside of sync can lead to a 
concurrent modification exception or other errors, even if that key/value pair 
will never change; see the small sketch after this list)
- changed the example example.xml a little, and switched the /elevate handler 
to load lazily
- updated code/configs to reflect the SearchHandler move
- fixed (pre-existing) bugs in code moved to VersionedFile (multiple opens of 
the same file)
- dropped the seemingly unrelated changes in SolrServlet (part of another 
patch?)
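
The concurrency point in generic form (not the actual component code, just the 
pattern the fix follows):

import java.util.HashMap;
import java.util.Map;

// Even a plain read of a HashMap must hold the same lock the writers use;
// otherwise a concurrent put/rehash can throw ConcurrentModificationException
// or hand back corrupt data, even when that key/value pair never changes.
public class EditorialBoosts {                 // hypothetical holder class
  private final Map<String, Float> boosts = new HashMap<String, Float>();

  public void setBoost(String queryText, float boost) {
    synchronized (boosts) {
      boosts.put(queryText, boost);
    }
  }

  public Float getBoost(String queryText) {
    synchronized (boosts) {                    // read inside the same monitor
      return boosts.get(queryText);
    }
  }
}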

 Editorial Query Boosting Component
 --

 Key: SOLR-418
 URL: https://issues.apache.org/jira/browse/SOLR-418
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Ryan McKinley
 Fix For: 1.3

 Attachments: SOLR-418-QueryBoosting.patch, 
 SOLR-418-QueryBoosting.patch, SOLR-418-QueryBoosting.patch, 
 SOLR-418-QueryBoosting.patch, SOLR-418-QueryBoosting.patch, 
 SOLR-418-QueryBoosting.patch, SOLR-418-QueryBoosting.patch, 
 SOLR-418-QueryBoosting.patch


 For a given query string, a human editor can say what documents should be 
 important.  This is related to a lucene discussion:
 http://www.nabble.com/Forced-Top-Document-tf4682070.html#a13408965
 Ideally, the position could be determined explicitly by the editor - 
 otherwise increasing the boost is probably sufficient.
 This patch uses the Search Component framework to inject custom document 
 boosting into the standard SearchHandler.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



LargeVolumeJettyTest failure

2008-01-17 Thread Yonik Seeley
I'm getting an intermittent failure in LargeVolumeJettyTest.
I haven't had a chance to look into it, but does anyone know what's up
with this?
-Yonik

<testcase classname="org.apache.solr.client.solrj.embedded.LargeVolumeJettyTest"
name="testMultiThreaded" time="4.797">
<failure message="expected:&lt;500&gt; but was:&lt;510&gt;"
type="junit.framework.AssertionFailedError">junit.framework.AssertionFailedError:
expected:&lt;500&gt; but was:&lt;510&gt;
at 
org.apache.solr.client.solrj.LargeVolumeTestBase.query(LargeVolumeTestBase.java:61)
at 
org.apache.solr.client.solrj.LargeVolumeTestBase.testMultiThreaded(LargeVolumeTestBase.java:53)
  </failure>
</testcase>


Re: LargeVolumeJettyTest failure

2008-01-17 Thread Ryan McKinley
I have seen it, but it always went away with a clean checkout.  I was 
able to reproduce it, and then can make it stay away by adding 
deleteByQuery( *:* ) -- see:

 http://svn.apache.org/viewvc?rev=613056&view=rev
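
Roughly this in the test setup ("server" being whatever SolrServer the test 
base class already hands you):

// Clear the index before the threads start, so counts left over from an
// earlier (dirty) run can't leak into the assertion.
SolrServer server = getSolrServer();   // helper assumed from the test base class
server.deleteByQuery( "*:*" );         // wipe any leftover documents
server.commit();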

If you see it pop up, holler.

ryan


Yonik Seeley wrote:

I'm getting an intermittent failure in LargeVolumeJettyTest.
I haven't had a chance to look into it, but does anyone know what's up
with this?
-Yonik

<testcase classname="org.apache.solr.client.solrj.embedded.LargeVolumeJettyTest"
name="testMultiThreaded" time="4.797">
<failure message="expected:&lt;500&gt; but was:&lt;510&gt;"
type="junit.framework.AssertionFailedError">junit.framework.AssertionFailedError:
expected:&lt;500&gt; but was:&lt;510&gt;
at 
org.apache.solr.client.solrj.LargeVolumeTestBase.query(LargeVolumeTestBase.java:61)
at 
org.apache.solr.client.solrj.LargeVolumeTestBase.testMultiThreaded(LargeVolumeTestBase.java:53)
  </failure>
</testcase>





Re: LargeVolumeJettyTest failure

2008-01-17 Thread Yonik Seeley
Ahh, that does match what I saw... I did a clean checkout and it worked,
then I applied an unrelated patch and it failed after that.

On Jan 18, 2008 12:11 AM, Ryan McKinley [EMAIL PROTECTED] wrote:
 I have seen it, but it always went away with a clean checkout.  I was
 able to reproduce it, and then can make it stay away by adding
 deleteByQuery( *:* ) -- see:
   http://svn.apache.org/viewvc?rev=613056&view=rev

 If you see it pop up, holler.

 ryan



 Yonik Seeley wrote:
  I'm getting an intermittent failure in LargeVolumeJettyTest.
  I haven't had a chance to look into it, but does anyone know what's up
  with this?
  -Yonik
 
  <testcase 
  classname="org.apache.solr.client.solrj.embedded.LargeVolumeJettyTest"
  name="testMultiThreaded" time="4.797">
  <failure message="expected:&lt;500&gt; but was:&lt;510&gt;"
  type="junit.framework.AssertionFailedError">junit.framework.AssertionFailedError:
  expected:&lt;500&gt; but was:&lt;510&gt;
    at 
  org.apache.solr.client.solrj.LargeVolumeTestBase.query(LargeVolumeTestBase.java:61)
    at 
  org.apache.solr.client.solrj.LargeVolumeTestBase.testMultiThreaded(LargeVolumeTestBase.java:53)
  </failure>
  </testcase>
 




[jira] Commented: (SOLR-418) Editorial Query Boosting Component

2008-01-17 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12560228#action_12560228
 ] 

Ryan McKinley commented on SOLR-418:


Thanks for looking at this - and fixing it up

bq. dropped the seemingly unrelated changes in SolrServlet (part of another 
patch?)

Not sure how that got in there; it was part of an issue I had with Resin 
loading servlets before filters and SOLR-350 initialization.


 Editorial Query Boosting Component
 --

 Key: SOLR-418
 URL: https://issues.apache.org/jira/browse/SOLR-418
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Ryan McKinley
Assignee: Ryan McKinley
 Fix For: 1.3

 Attachments: SOLR-418-QueryBoosting.patch, 
 SOLR-418-QueryBoosting.patch, SOLR-418-QueryBoosting.patch, 
 SOLR-418-QueryBoosting.patch, SOLR-418-QueryBoosting.patch, 
 SOLR-418-QueryBoosting.patch, SOLR-418-QueryBoosting.patch, 
 SOLR-418-QueryBoosting.patch


 For a given query string, a human editor can say what documents should be 
 important.  This is related to a lucene discussion:
 http://www.nabble.com/Forced-Top-Document-tf4682070.html#a13408965
 Ideally, the position could be determined explicitly by the editor - 
 otherwise increasing the boost is probably sufficient.
 This patch uses the Search Component framework to inject custom document 
 boosting into the standard SearchHandler.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-350) Manage Multiple SolrCores

2008-01-17 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12560234#action_12560234
 ] 

Ryan McKinley commented on SOLR-350:


Hi Henri-

We're getting there, but I had trouble applying this patch; can you post a 
new one with a few changes?

1. Can you change your editor settings to use two spaces rather than tabs?  In 
general, Solr code should use two spaces rather than tabs or 4 spaces.

2. To avoid confusion with o.a.s.request.XMLWriter, can we call XmlWriter 
something else?  XmlWriterHelper? XmlWriterUtils?

3. Can we make XmlWriter a package protected class in o.a.s.core?  This way we 
don't have to make it part of the public API.  If there is a need for it later, 
we can easily move it.  Also, if it can be replaced with an off the shelf 
library, we can do that later without mucking anyone up.

Thanks for your work and patience with this!

 Manage Multiple SolrCores
 -

 Key: SOLR-350
 URL: https://issues.apache.org/jira/browse/SOLR-350
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.3
Reporter: Ryan McKinley
 Attachments: SOLR-350-MultiCore.patch, SOLR-350-MultiCore.patch, 
 SOLR-350-MultiCore.patch, SOLR-350-Naming.patch, SOLR-350-Naming.patch, 
 solr-350.patch, solr-350.patch, solr-350.patch, solr-350.patch, 
 solr-350.patch, solr-350.patch


 In SOLR-215, we enabled support for more than one SolrCore - but there is no 
 way to use them yet.
 We need to make some interface to manage, register, and modify available SolrCores

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Closed: (SOLR-459) SolrDispatchFilter bug or wrong default parameter

2008-01-17 Thread Sergey Dryganets (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Dryganets closed SOLR-459.
-

Resolution: Fixed

Sorry -- from Ryan's commit today I see that this is actually intended behavior :)

+  "Add: <requestHandler name=\"/update\" 
class=\"solr.XmlUpdateRequestHandler\" > to your solrconfig.xml\n\n" );
}

 SolrDispatchFilter bug or wrong default parameter
 -

 Key: SOLR-459
 URL: https://issues.apache.org/jira/browse/SOLR-459
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
 Environment: latest solr version from repository
Reporter: Sergey Dryganets

 String path = req.getServletPath();
 Suppose we have a handler named highlight and Solr running at 
 http://localhost:8080/solr
 we request the URL
 http://localhost:8080/solr/highlight
 so path == /highlight
 if( pathPrefix != null && path.startsWith( pathPrefix ) ) {
   path = path.substring( pathPrefix.length() );
 }
 the default pathPrefix == null
 so path is still /highlight
 int idx = path.indexOf( ':' );
 if( idx > 0 ) {
   // save the portion after the ':' for a 'handler' path parameter
   path = path.substring( 0, idx );
  }
 this does not change path either
 so we try to look up the handler by the name /highlight but the real handler 
 name is highlight
  (There is normalization inside the getRequestHandler method but it only 
 removes a slash at the end of the path)
 handler = core.getRequestHandler( path );
 After changing the default value of pathPrefix to / (in web.xml) everything 
 works fine

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.