Re: VOTE: RC0 Release apache-solr-ref-guide-4.6.pdf

2013-11-30 Thread Shalin Shekhar Mangar
Thanks Ahmet. I have fixed both pages.

On Thu, Nov 28, 2013 at 10:18 PM, Ahmet Arslan iori...@yahoo.com wrote:


 Hi,

 On page 293: "rm -r shard*/solr/zoo_data" should be "rm -r 
 node*/solr/zoo_data"
 On page 297: "... shard, an d forwards ..." should be "... shard, and 
 forwards ..."

 Thanks,
 Ahmet





 On Wednesday, November 27, 2013 2:47 PM, Cassandra Targett 
 casstarg...@gmail.com wrote:

 I noticed a couple of small typos and inconsistencies that I've fixed,
 but I don't think they warrant a respin. They're more for appearance
 than for any factual problems.

 +1

 Sorry for the delay from me - I've been traveling for holidays.

 On Tue, Nov 26, 2013 at 4:22 AM, Jan Høydahl jan@cominvent.com wrote:
 * Page 5: Screenshots with 4.0.0-beta texts
 * Page 165: Links to 4.0.0 version of JavaDoc (now fixed in Confluence)
* Page 204: Table - group.func - "Supported only in Solr 4.0." (should be 
"Supported since Solr 4.0.") (now fixed in Confluence)
 * Page 308: Strange xml code box layout, why all the whitespace?

 But these are minor, so here's my +1

 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com

 On Nov 25, 2013, at 19:34, Chris Hostetter hossman_luc...@fucit.org wrote:


 Please VOTE to release the following as apache-solr-ref-guide-4.6.pdf ...

 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-4.6-RC0/

 $ cat apache-solr-ref-guide-4.6.pdf.sha1
 7ad494c5a3cdc085e01a54d507ae33a75cc319e6  apache-solr-ref-guide-4.6.pdf




 -Hoss





-- 
Regards,
Shalin Shekhar Mangar.




Re: VOTE: RC0 Release apache-solr-ref-guide-4.6.pdf

2013-11-30 Thread Shalin Shekhar Mangar
On Tue, Nov 26, 2013 at 12:04 AM, Chris Hostetter
hossman_luc...@fucit.org wrote:

 Please VOTE to release the following as apache-solr-ref-guide-4.6.pdf ...

 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-4.6-RC0/

 $ cat apache-solr-ref-guide-4.6.pdf.sha1
 7ad494c5a3cdc085e01a54d507ae33a75cc319e6  apache-solr-ref-guide-4.6.pdf



+1 to release

-- 
Regards,
Shalin Shekhar Mangar.




[jira] [Assigned] (SOLR-5515) NPE when getting stats on date field with empty result on solrcloud

2013-11-30 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-5515:
---

Assignee: Shalin Shekhar Mangar

 NPE when getting stats on date field with empty result on solrcloud
 ---

 Key: SOLR-5515
 URL: https://issues.apache.org/jira/browse/SOLR-5515
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.5.1, 4.6
 Environment: ubuntu 13.04, java 1.7.0_45-b18 
Reporter: Alexander Sagen
Assignee: Shalin Shekhar Mangar
Priority: Critical
  Labels: datefield, solrcloud, stats, statscomponent

 Steps to reproduce:
 1. Download solr 4.6.0, unzip twice in different directories.
 2. Start a two-shard cluster based on default example
 {quote}
 dir1/example java -Dbootstrap_confdir=./solr/collection1/conf 
 -Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.jar
 dir2/example java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
 {quote}
 3. Visit 
 http://localhost:8983/solr/query?q=author:a&stats=true&stats.field=last_modified
 This causes a NullPointerException (given that the index is empty or the query 
 returns 0 rows)
 Stacktrace:
 {noformat}
 190 [qtp290025410-11] INFO  org.apache.solr.core.SolrCore  – 
 [collection1] webapp=/solr path=/query 
 params={stats.field=last_modified&stats=true&q=author:a} hits=0 status=500 
 QTime=8 
 191 [qtp290025410-11] ERROR org.apache.solr.servlet.SolrDispatchFilter  – 
 null:java.lang.NullPointerException
 at 
 org.apache.solr.handler.component.DateStatsValues.updateTypeSpecificStats(StatsValuesFactory.java:409)
 at 
 org.apache.solr.handler.component.AbstractStatsValues.accumulate(StatsValuesFactory.java:109)
 at 
 org.apache.solr.handler.component.StatsComponent.handleResponses(StatsComponent.java:113)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:311)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:710)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:413)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
 at org.eclipse.jetty.server.Server.handle(Server.java:368)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
 at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
 at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
 at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
 at java.lang.Thread.run(Thread.java:744)
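 {noformat}
 
 A minimal sketch of the kind of merge-side null guard that would avoid the 
 trace above: when a shard matched no documents, its date min and max come 
 back null and have to be skipped while merging shard responses. Class, 
 method, and key names here are illustrative, not the actual SOLR-5515 patch.
 {noformat}
 import java.util.Date;
 import org.apache.solr.common.util.NamedList;
 
 // Hypothetical sketch: merge per-shard date stats, ignoring null min/max
 // values from shards whose result set was empty.
 class DateStatsMergeSketch {
   private Date min, max;
 
   void mergeShardStats(NamedList<Object> shardStats) {
     Date shardMin = (Date) shardStats.get("min");
     Date shardMax = (Date) shardStats.get("max");
     if (shardMin != null && (min == null || shardMin.before(min))) {
       min = shardMin;  // only merge non-null shard minima
     }
     if (shardMax != null && (max == null || shardMax.after(max))) {
       max = shardMax;  // only merge non-null shard maxima
     }
   }
 }
 {noformat}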
 

[jira] [Assigned] (SOLR-5514) atomic update throws exception if the schema contains uuid fields: Invalid UUID String: 'java.util.UUID:e26c4d56-e98d-41de-9b7f-f63192089670'

2013-11-30 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-5514:
---

Assignee: Shalin Shekhar Mangar

 atomic update throws exception if the schema contains uuid fields: Invalid 
 UUID String: 'java.util.UUID:e26c4d56-e98d-41de-9b7f-f63192089670'
 -

 Key: SOLR-5514
 URL: https://issues.apache.org/jira/browse/SOLR-5514
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.5.1
 Environment: unix and windows
Reporter: Dirk Reuss 
Assignee: Shalin Shekhar Mangar

 I am updating an existing document with the statement 
 <add><doc><field name='name' update='set'>newvalue</field>
 All fields are stored and I have several UUID fields. About 10-20% of the 
 update commands will fail with the message: (example)
 Invalid UUID String: 'java.util.UUID:532c9353-d391-4a04-8618-dc2fa1ef8b35'
 The point is that java.util.UUID seems to be prepended to the original UUID 
 stored in the field, and this error occurs when the value is written.
 I tried to check if this specific UUID field was the problem and
 added the UUID field in the update xml with (<field name='id1' 
 update='set'>...). But the error simply moved to another UUID field.
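 
 A hypothetical sketch of the failure mode reported above: the stored value 
 appears to get re-serialized with its Java class name ("java.util.UUID:") 
 prepended, so a lenient parse would have to strip that tag first. This only 
 illustrates the report; it is not the actual SOLR-5514 fix.
 {noformat}
 import java.util.Locale;
 import java.util.UUID;
 
 // Hypothetical illustration: tolerate the mangled value from the report by
 // stripping a leading "java.util.UUID:" tag before parsing the UUID string.
 class UuidNormalizeSketch {
   static UUID parseLenient(String raw) {
     String s = raw.trim();
     if (s.startsWith("java.util.UUID:")) {
       s = s.substring("java.util.UUID:".length());  // drop the class-name prefix
     }
     return UUID.fromString(s.toLowerCase(Locale.ROOT));
   }
 
   public static void main(String[] args) {
     // prints e26c4d56-e98d-41de-9b7f-f63192089670
     System.out.println(parseLenient("java.util.UUID:e26c4d56-e98d-41de-9b7f-f63192089670"));
   }
 }
 {noformat}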
 here is the original exception:
 <lst name="responseHeader"><int name="status">500</int><int 
 name="QTime">34</int></lst><lst name="error"><str name="msg">Error while 
 creating field 
 'MyUUIDField{type=uuid,properties=indexed,stored,omitTermFreqAndPositions,required,
  required=true}' from value 
 'java.util.UUID:e26c4d56-e98d-41de-9b7f-f63192089670'</str><str 
 name="trace">org.apache.solr.common.SolrException: Error while creating field 
 'MyUUIDField{type=uuid,properties=indexed,stored,omitTermFreqAndPositions,required,
  required=true}' from value 
 'java.util.UUID:e26c4d56-e98d-41de-9b7f-f63192089670'
   at org.apache.solr.schema.FieldType.createField(FieldType.java:259)
   at org.apache.solr.schema.StrField.createFields(StrField.java:56)
   at 
 org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:47)
   at 
 org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:118)
   at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:215)
   at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
   at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
   at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:556)
   at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:692)
   at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:435)
   at 
 org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
   at 
 org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:247)
   at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:174)
   at 
 org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
   at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:703)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:406)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
   at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
   at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
   at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
   at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
   at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
   at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
   at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
   at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
   at 
 org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
   at 
 

Re: Patch proposal - LanguageIdentifierUpdateProcessor uses only firstValue() on multivalued fields

2013-11-30 Thread Shalin Shekhar Mangar
Please feel free to open an issue. A patch would be great!

On Fri, Nov 29, 2013 at 4:55 PM, Müller, Stephan
muel...@ponton-consulting.de wrote:
 Hello list.

 After discussing the thread "LanguageIdentifierUpdateProcessor uses only 
 firstValue() on multivalued fields" on solr-user,
 I'd like to propose a patch to add the following feature:

 LanguageIdentifierUpdateProcessor should use all (String) values of a 
 multivalued field for language detection.

 As of now, the LanguageIdentifierUpdateProcessor implicitly retrieves only the 
 first value of a multivalued field.
 This leads to omitting any other values of such a field. Furthermore, if for 
 some reason the first value is not a String but the following values are 
 Strings, there's no language detection at all for such a multivalued field.
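 
 A minimal sketch of the proposed behavior (names are illustrative, not the 
 actual patch): collect all String values of the multivalued field, skipping 
 non-String values instead of giving up, and feed the concatenation to 
 language detection.
 {noformat}
 import java.util.Collection;
 import org.apache.solr.common.SolrInputDocument;
 
 // Illustrative sketch: gather ALL String values of a multivalued field,
 // not just the first one, for language detection.
 class AllValuesSketch {
   static String concatFieldText(SolrInputDocument doc, String fieldName) {
     Collection<Object> values = doc.getFieldValues(fieldName);
     if (values == null) {
       return "";
     }
     StringBuilder text = new StringBuilder();
     for (Object v : values) {
       if (v instanceof String) {  // skip non-String values instead of aborting
         text.append((String) v).append(' ');
       }
     }
     return text.toString().trim();
   }
 }
 {noformat}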

 I propose this patch here, following your contribution guidelines.
 It is unclear to me if this scenario was just overlooked or if this was a 
 conscious design decision.

 So, let me hear what you think of this feature. Maybe you are already working 
 on it.
 If not, I'm eager to file my (probably first) feature request and patch on 
 JIRA.
 I have a working trunk checkout set up in IDEA on OS X, and ant clean install 
 reports SUCCESS.


 Looking forward to hearing from you!

 Regards,
 Stephan - srm





-- 
Regards,
Shalin Shekhar Mangar.




[jira] [Commented] (SOLR-5457) Admin UI - Reload Core/Collection from 'Files' Page

2013-11-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13835666#comment-13835666
 ] 

Erick Erickson commented on SOLR-5457:
--

[~steffkes] What's the status of this? We were also going to add a test 
reload button or something like that so people could test ahead of time. I'm 
tidying things up a bit since I'll be away from my computer for much of 
December. If we cut a 4.7, it'd be good if all of this was in place.

Thanks!



 Admin UI - Reload Core/Collection from 'Files' Page
 ---

 Key: SOLR-5457
 URL: https://issues.apache.org/jira/browse/SOLR-5457
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
 Fix For: 5.0, 4.7


 To improve the workflow we introduced in SOLR-5446, we should add a Reload 
 Core (resp. Collection) button on the 'Files' page, so that one could see 
 the changes they actually made go live.






[jira] [Resolved] (LUCENE-1952) @since tags missing from Javadocs

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1952.


Resolution: Won't Fix

2013 Old JIRA cleanup

 @since tags missing from Javadocs
 -

 Key: LUCENE-1952
 URL: https://issues.apache.org/jira/browse/LUCENE-1952
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/javadocs
Reporter: Chris Pilsworth
Priority: Minor

 It would be useful to be able to see at which version classes/methods have 
 been added by adding the @since javadoc tag.   I use quite an old version of 
 Lucene that is integrated into the CMS I use, and often find that the 
 features I need to use are not supported in the version I have.






[jira] [Resolved] (SOLR-1444) Add option in solrconfig.xml to override the LogMergePolicy calibrateSizeByDeletes

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1444.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Add option in solrconfig.xml to override the LogMergePolicy 
 calibrateSizeByDeletes
 

 Key: SOLR-1444
 URL: https://issues.apache.org/jira/browse/SOLR-1444
 Project: Solr
  Issue Type: Improvement
  Components: update
Affects Versions: 1.4
 Environment: NA
Reporter: Jibo John
Priority: Minor

 A patch was committed in Lucene 
 (http://issues.apache.org/jira/browse/LUCENE-1634) that would consider the 
 number of deleted documents as a criterion when deciding which segments to 
 merge.
 By default, calibrateSizeByDeletes = false in LogMergePolicy. So, currently, 
 there is no way in Solr to set calibrateSizeByDeletes = true.
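 
 For context, a sketch of how the flag is flipped directly against the Lucene 
 4.x API (the issue asked for a solrconfig.xml hook to do the same from Solr; 
 this is not that hook):
 {noformat}
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.index.IndexWriterConfig;
 import org.apache.lucene.index.LogByteSizeMergePolicy;
 import org.apache.lucene.util.Version;
 
 // Sketch: enable delete-aware segment sizing on a LogMergePolicy.
 class CalibrateByDeletesSketch {
   static IndexWriterConfig configWithDeleteCalibration() {
     LogByteSizeMergePolicy mp = new LogByteSizeMergePolicy();
     mp.setCalibrateSizeByDeletes(true);  // count deletions when sizing segments for merge
     IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_46,
         new StandardAnalyzer(Version.LUCENE_46));
     iwc.setMergePolicy(mp);
     return iwc;
   }
 }
 {noformat}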






[jira] [Resolved] (LUCENE-1964) InstantiatedIndex : TermFreqVector is missing

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1964.


Resolution: Won't Fix

2013 Old JIRA cleanup

 InstantiatedIndex : TermFreqVector is missing
 -

 Key: LUCENE-1964
 URL: https://issues.apache.org/jira/browse/LUCENE-1964
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 2.9
 Environment: java 1.6
Reporter: David Causse
 Attachments: iiw-regression-fix.patch, term-vector-fix.patch


 TermFreqVector is missing when the index is created via the constructor.
 The constructor expects that fields with a TermVector are retrieved with the 
 getFields call, but this call returns only stored fields, and such fields are 
 never/rarely stored.
 I've attached a patch to fix this issue.
 I had to add an int freq field to InstantiatedTermDocumentInformation because 
 we are not sure we can use the size of the termPositions array as freq 
 information; this information may not be available with TermVector.YES.
 I don't know if I did it well, but it works with the unit test attached.






[jira] [Resolved] (SOLR-1509) ShowFileRequestHandler has misleading error when asked for absolute path

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1509.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 ShowFileRequestHandler has misleading error when asked for absolute path
 -

 Key: SOLR-1509
 URL: https://issues.apache.org/jira/browse/SOLR-1509
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Simon Rosenthal
Priority: Minor

 When a user attempts to use the ShowFileRequestHandler (i.e. /admin/file) to 
 access a file using an absolute path (which may result from solr.xml 
 containing an absolute path for schema.xml or solrconfig.xml outside of the 
 normal conf dir), then the error message indicates that a file with the path 
 consisting of the confdir + the absolute path can't be found. The Handler 
 should explicitly check for absolute paths (like it checks for ..) and the 
 error message should make it clear that absolute paths are not allowed.
 Example of current behavior...
 {noformat}
 schema path = /home/solrdata/rig1/conf/schema.xml
 url displayed in admin form = 
 http://host:port/solr/core1/admin/file/?file=/home/solrdata/rig1/conf/schema.xml
 error message: Can not find: schema.xml 
 [/path/to/core1/conf/directory/home/solrdata/rig1/conf/schema.xml]
 {noformat}
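 
 A minimal sketch of the explicit check being asked for, analogous to the 
 existing ".." check; names are illustrative, not the actual handler code:
 {noformat}
 import java.io.File;
 import org.apache.solr.common.SolrException;
 
 // Sketch: reject absolute paths up front with a clear error message,
 // instead of blindly prefixing the conf dir.
 class AbsolutePathCheckSketch {
   static void validateRequestedFile(String fname) {
     if (fname.contains("..")) {
       throw new SolrException(SolrException.ErrorCode.FORBIDDEN,
           "Invalid path: " + fname);
     }
     if (new File(fname).isAbsolute()) {
       throw new SolrException(SolrException.ErrorCode.FORBIDDEN,
           "Absolute paths are not allowed: " + fname);
     }
   }
 }
 {noformat}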






[jira] [Resolved] (LUCENE-2019) map unicode process-internal codepoints to replacement character

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-2019.


Resolution: Won't Fix

2013 Old JIRA cleanup

 map unicode process-internal codepoints to replacement character
 

 Key: LUCENE-2019
 URL: https://issues.apache.org/jira/browse/LUCENE-2019
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Robert Muir
Priority: Minor
 Attachments: LUCENE-2019.patch


 A spinoff from LUCENE-2016.
 There are several process-internal codepoints in unicode, we should not store 
 these in the index.
 Instead they should be mapped to replacement character (U+FFFD), so they can 
 be used process-internally.
 An example of this is how Lucene Java currently uses U+FFFF 
 process-internally; it can't be in the index or it will cause problems. 
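 
 A minimal sketch of the proposed mapping (illustrative only; the attached 
 LUCENE-2019.patch may do this elsewhere in the analysis chain): replace 
 process-internal codepoints such as U+FFFF with U+FFFD before text reaches 
 the index.
 {noformat}
 // Sketch: map the process-internal codepoint U+FFFF to the Unicode
 // replacement character U+FFFD.
 class ReplacementCharSketch {
   static String mapInternalCodepoints(String s) {
     StringBuilder sb = new StringBuilder(s.length());
     for (int i = 0; i < s.length(); i++) {
       char c = s.charAt(i);
       sb.append(c == '\uFFFF' ? '\uFFFD' : c);
     }
     return sb.toString();
   }
 }
 {noformat}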






[jira] [Resolved] (SOLR-1514) Facet search results contain 0:0 entries although '0' values were not indexed.

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1514.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Facet search results contain 0:0 entries although '0' values were not indexed.
 --

 Key: SOLR-1514
 URL: https://issues.apache.org/jira/browse/SOLR-1514
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 1.3
 Environment: Solr is on: Linux  2.6.18-92.1.13.el5xen
Reporter: Renata Perkowska

 Hi,
 in my JMeter ATs I can see that under some circumstances facet search 
 results contain '0' both as keys
 and values for the integer field called 'year', although I never index zeros. 
 When I do a normal search, I don't see any indexed fields with zeros. 
 When I run my facet test (using JMeter) in isolation, everything works fine. 
 It happens only when it's being run after other tests
 (and other indexing/deleting). On the other hand, it shouldn't be the case 
 that other indexing runs are influencing this test, as at the end of each 
 test I'm deleting the
 indexed documents, so before running the facet test the index is empty.
 My facet test looks as follows:
  1. Index group of documents
  2. Perform search on facets
  3. Remove documents from the index.
 The results that I'm getting for an integer field 'year':
  1990:4
  1995:4
  0:0
  1991:0
  1992:0
  1993:0
  1994:0
  1996:0
  1997:0
  1998:0
 I'm indexing only values 1990-1999, so there certainly shouldn't be any '0'  
 as keys in the result set.
 The index is being optimized not after each document deletion from an 
 index, but only when an index is loaded/unloaded, so the optimization won't 
 solve the problem in this case. 
 If facet.mincount>0 is provided, then I'm not getting 0:0, but other 
 entries with '0' values are gone as well:
 1990:4
 1995:4
 I'm also indexing text fields, but I don't see a similar situation in this 
 case. This bug only happens for integer fields.
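 
 A small SolrJ sketch of the workaround mentioned above: setting facet.mincount 
 to 1 suppresses the spurious 0:0 entry, at the cost of also hiding legitimate 
 zero-count buckets. The 'year' field name comes from the report; the rest is 
 illustrative.
 {noformat}
 import org.apache.solr.client.solrj.SolrQuery;
 
 // Sketch: facet on 'year' but drop zero-count buckets, including the bogus "0".
 class FacetMincountSketch {
   static SolrQuery yearFacetQuery() {
     SolrQuery q = new SolrQuery("*:*");
     q.setFacet(true);
     q.addFacetField("year");
     q.setFacetMinCount(1);
     return q;
   }
 }
 {noformat}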






[jira] [Resolved] (SOLR-1508) Use field cache when creating response, if available and configured.

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1508.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Use field cache when creating response, if available and configured.
 

 Key: SOLR-1508
 URL: https://issues.apache.org/jira/browse/SOLR-1508
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.3
Reporter: Tom Hill

 Allow the user to configure a field to be returned to the user from the field 
 cache, instead of getting the field from disk. Relies on LUCENE-1981.






[jira] [Resolved] (LUCENE-2009) task.mem should be set to use jvmargs that pin the min and max heap size

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-2009.


Resolution: Won't Fix

2013 Old JIRA cleanup

 task.mem should be set to use jvmargs that pin the min and max heap size
 

 Key: LUCENE-2009
 URL: https://issues.apache.org/jira/browse/LUCENE-2009
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/benchmark
Affects Versions: 2.9
Reporter: Mark Miller
Priority: Minor

 Currently, task.mem sets the java Ant task param maxmemory - there is no 
 equivalent minmemory though. jvmargs should be used instead, with -Xms and 
 -Xmx pinned to task.mem - otherwise, results are affected as the JVM resizes 
 the heap. 






[jira] [Resolved] (LUCENE-1997) Explore performance of multi-PQ vs single-PQ sorting API

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1997.


Resolution: Won't Fix

2013 Old JIRA cleanup

 Explore performance of multi-PQ vs single-PQ sorting API
 

 Key: LUCENE-1997
 URL: https://issues.apache.org/jira/browse/LUCENE-1997
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 2.9
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, 
 LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, 
 LUCENE-1997.patch, LUCENE-1997.patch


 Spinoff from recent lucene 2.9 sorting algorithm thread on java-dev,
 where a simpler (non-segment-based) comparator API is proposed that
 gathers results into multiple PQs (one per segment) and then merges
 them in the end.
 I started from John's multi-PQ code and worked it into
 contrib/benchmark so that we could run perf tests.  Then I generified
 the Python script I use for running search benchmarks (in
 contrib/benchmark/sortBench.py).
 The script first creates indexes with 1M docs (based on
 SortableSingleDocSource, and based on wikipedia, if available).  Then
 it runs various combinations:
   * Index with 20 balanced segments vs index with the normal log
 segment size
   * Queries with different numbers of hits (only for wikipedia index)
   * Different top N
   * Different sorts (by title, for wikipedia, and by random string,
 random int, and country for the random index)
 For each test, 7 search rounds are run and the best QPS is kept.  The
 script runs singlePQ then multiPQ, and records the resulting best QPS
 for each and produces table (in Jira format) as output.






[jira] [Resolved] (LUCENE-2033) exposed MultiTermDocs and MultiTermPositions from package protected to public

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-2033.


Resolution: Won't Fix

2013 Old JIRA cleanup

 exposed MultiTermDocs and MultiTermPositions from package protected to public
 -

 Key: LUCENE-2033
 URL: https://issues.apache.org/jira/browse/LUCENE-2033
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 2.9
Reporter: John Wang

 making these classes public can help classes that extend MultiReader.






[jira] [Commented] (SOLR-5457) Admin UI - Reload Core/Collection from 'Files' Page

2013-11-30 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13835678#comment-13835678
 ] 

Stefan Matheis (steffkes) commented on SOLR-5457:
-

Will see that I get this done in the next week, at least the core thing - the 
whole collection handling isn't there yet and I'm not sure that this is the 
right place to start with. A button to test the configuration should be doable 
as well (:

 Admin UI - Reload Core/Collection from 'Files' Page
 ---

 Key: SOLR-5457
 URL: https://issues.apache.org/jira/browse/SOLR-5457
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
 Fix For: 5.0, 4.7


 To improve the workflow we introduced in SOLR-5446, we should add a Reload 
 Core (resp. Collection) button on the 'Files' page, so that one could see 
 the changes they actually made go live.






[jira] [Commented] (SOLR-5515) NPE when getting stats on date field with empty result on solrcloud

2013-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13835679#comment-13835679
 ] 

ASF subversion and git services commented on SOLR-5515:
---

Commit 1546725 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1546725 ]

SOLR-5515: NPE when getting stats on date field with empty result on SolrCloud

 NPE when getting stats on date field with empty result on solrcloud
 ---

 Key: SOLR-5515
 URL: https://issues.apache.org/jira/browse/SOLR-5515
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.5.1, 4.6
 Environment: ubuntu 13.04, java 1.7.0_45-b18 
Reporter: Alexander Sagen
Assignee: Shalin Shekhar Mangar
Priority: Critical
  Labels: datefield, solrcloud, stats, statscomponent

 Steps to reproduce:
 1. Download solr 4.6.0, unzip twice in different directories.
 2. Start a two-shard cluster based on default example
 {quote}
 dir1/example java -Dbootstrap_confdir=./solr/collection1/conf 
 -Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.jar
 dir2/example java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
 {quote}
 3. Visit 
 http://localhost:8983/solr/query?q=author:a&stats=true&stats.field=last_modified
 This causes a NullPointerException (given that the index is empty or the query 
 returns 0 rows)
 Stacktrace:
 {noformat}
 190 [qtp290025410-11] INFO  org.apache.solr.core.SolrCore  – 
 [collection1] webapp=/solr path=/query 
 params={stats.field=last_modified&stats=true&q=author:a} hits=0 status=500 
 QTime=8 
 191 [qtp290025410-11] ERROR org.apache.solr.servlet.SolrDispatchFilter  – 
 null:java.lang.NullPointerException
 at 
 org.apache.solr.handler.component.DateStatsValues.updateTypeSpecificStats(StatsValuesFactory.java:409)
 at 
 org.apache.solr.handler.component.AbstractStatsValues.accumulate(StatsValuesFactory.java:109)
 at 
 org.apache.solr.handler.component.StatsComponent.handleResponses(StatsComponent.java:113)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:311)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:710)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:413)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
 at org.eclipse.jetty.server.Server.handle(Server.java:368)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
 at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
 at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
 at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
 at 
 

[jira] [Commented] (SOLR-5515) NPE when getting stats on date field with empty result on solrcloud

2013-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13835680#comment-13835680
 ] 

ASF subversion and git services commented on SOLR-5515:
---

Commit 1546728 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1546728 ]

SOLR-5515: NPE when getting stats on date field with empty result on SolrCloud

 NPE when getting stats on date field with empty result on solrcloud
 ---

 Key: SOLR-5515
 URL: https://issues.apache.org/jira/browse/SOLR-5515
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.5.1, 4.6
 Environment: ubuntu 13.04, java 1.7.0_45-b18 
Reporter: Alexander Sagen
Assignee: Shalin Shekhar Mangar
Priority: Critical
  Labels: datefield, solrcloud, stats, statscomponent

 Steps to reproduce:
 1. Download solr 4.6.0, unzip twice in different directories.
 2. Start a two-shard cluster based on default example
 {quote}
 dir1/example java -Dbootstrap_confdir=./solr/collection1/conf 
 -Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.jar
 dir2/example java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
 {quote}
 3. Visit 
 http://localhost:8983/solr/query?q=author:a&stats=true&stats.field=last_modified
 This causes a NullPointerException (given that the index is empty or the query 
 returns 0 rows)
 Stacktrace:
 {noformat}
 190 [qtp290025410-11] INFO  org.apache.solr.core.SolrCore  – 
 [collection1] webapp=/solr path=/query 
 params={stats.field=last_modified&stats=true&q=author:a} hits=0 status=500 
 QTime=8 
 191 [qtp290025410-11] ERROR org.apache.solr.servlet.SolrDispatchFilter  – 
 null:java.lang.NullPointerException
 at 
 org.apache.solr.handler.component.DateStatsValues.updateTypeSpecificStats(StatsValuesFactory.java:409)
 at 
 org.apache.solr.handler.component.AbstractStatsValues.accumulate(StatsValuesFactory.java:109)
 at 
 org.apache.solr.handler.component.StatsComponent.handleResponses(StatsComponent.java:113)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:311)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:710)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:413)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
 at org.eclipse.jetty.server.Server.handle(Server.java:368)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
 at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
 at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
 at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
 at 
 

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1075 - Failure!

2013-11-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1075/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 9575 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:420: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:400: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:39: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/extra-targets.xml:37: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build.xml:189: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/common-build.xml:491: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/common-build.xml:413: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/common-build.xml:359: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/common-build.xml:379: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/common-build.xml:359: 
impossible to resolve dependencies:
resolve failed - see output for details

Total time: 52 minutes 8 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops 
-XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Resolved] (LUCENE-375) fish*~ parses to PrefixQuery - should be a parse exception

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-375.
---

Resolution: Won't Fix

2013 Old JIRA cleanup

 fish*~ parses to PrefixQuery - should be a parse exception
 --

 Key: LUCENE-375
 URL: https://issues.apache.org/jira/browse/LUCENE-375
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 1.4
 Environment: Operating System: other
 Platform: Other
Reporter: Erik Hatcher
Assignee: Luis Alves
Priority: Minor

 QueryParser parses "fish*~" into a "fish*" PrefixQuery and silently drops the 
 "~".  This really should be a parse exception.






[jira] [Resolved] (LUCENE-2800) Search Index Generation fails

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-2800.


Resolution: Won't Fix

2013 Old JIRA cleanup

 Search Index Generation fails
 -

 Key: LUCENE-2800
 URL: https://issues.apache.org/jira/browse/LUCENE-2800
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 2.0.0
 Environment: Windows Server 2003 
Reporter: Sunitha Belavagi

 Hi,
 We are using Lucene 2.0.0 for the search index in our Comergent application.
 It had been working fine for more than 3 years. 
 Since this week, it has been throwing an exception while creating a new index 
 and also for the incremental index.
 Below is the exception:
 com.comergent.api.appservices.productService.ProductServiceException: 
 java.io.IOException: Cannot delete 
 ...\searchIndex\en_US\MasterIndex_602580\segments 
   at 
 com.comergent.reference.appservices.productService.search.indexBuilder.CatalogIndexSetBuilder.indexPCFromCache(CatalogIndexSetBuilder.java:634)
  
   at 
 com.comergent.reference.appservices.productService.search.indexBuilder.CatalogIndexSetBuilder.buildIndexSet(CatalogIndexSetBuilder.java:276)
  
   at 
 com.comergent.appservices.search.indexBuilder.IndexSetBuilder$BuilderThread.run(IndexSetBuilder.java:469)
  
 Caused by: java.io.IOException: Cannot delete 
 searchIndex\en_US\MasterIndex_602580\segments 
   at org.apache.lucene.store.FSDirectory.renameFile(FSDirectory.java:268) 
   at org.apache.lucene.index.SegmentInfos.write(SegmentInfos.java:95) 
   at org.apache.lucene.index.IndexWriter$4.doBody(IndexWriter.java:726) 
   at org.apache.lucene.store.Lock$With.run(Lock.java:99) 
   at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:724) 
   at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:686) 
   at 
 org.apache.lucene.index.IndexWriter.maybeMergeSegments(IndexWriter.java:674) 
   at 
 org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:479) 
   at 
 org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:462) 
   at 
 com.comergent.reference.appservices.productService.search.indexBuilder.CatalogIndexSetBuilder.indexPCFromCache(CatalogIndexSetBuilder.java:630)
  
   ... 2 more 
 2010.12.05 06:25:13:532 Env/Thread-21961:ERROR:CatalogIndexSetBuilder 
 CatalogIndexSetBuilder: [MasterIndex_602580] - Exception: 
 com.comergent.api.appservices.productService.ProductServiceException: 
 java.io.IOException: Cannot delete ...\MasterIndex_602580\segments
 2010.12.05 06:25:13:532 Env/Thread-21961:INFO:CMGT_SEARCH 
 IndexSetBuilder$BuilderThread: error building the index for: 
 MasterIndex_602580
 com.comergent.api.exception.ComergentException: 
 com.comergent.api.appservices.productService.ProductServiceException: 
 java.io.IOException: Cannot delete 
 \searchIndex\en_US\MasterIndex_602580\segments
   at 
 com.comergent.reference.appservices.productService.search.indexBuilder.CatalogIndexSetBuilder.buildIndexSet(CatalogIndexSetBuilder.java:305)
   at 
 com.comergent.appservices.search.indexBuilder.IndexSetBuilder$BuilderThread.run(IndexSetBuilder.java:469)
 Caused by: 
 com.comergent.api.appservices.productService.ProductServiceException: 
 java.io.IOException: Cannot delete ...\MasterIndex_602580\segments
   at 
 com.comergent.reference.appservices.productService.search.indexBuilder.CatalogIndexSetBuilder.indexPCFromCache(CatalogIndexSetBuilder.java:634)
   at 
 com.comergent.reference.appservices.productService.search.indexBuilder.CatalogIndexSetBuilder.buildIndexSet(CatalogIndexSetBuilder.java:276)
   ... 1 more
 Caused by: java.io.IOException: Cannot delete ...\MasterIndex_602580\segments
   at org.apache.lucene.store.FSDirectory.renameFile(FSDirectory.java:268)
   at org.apache.lucene.index.SegmentInfos.write(SegmentInfos.java:95)
   at org.apache.lucene.index.IndexWriter$4.doBody(IndexWriter.java:726)
   at org.apache.lucene.store.Lock$With.run(Lock.java:99)
   at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:724)
   at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:686)
   at 
 org.apache.lucene.index.IndexWriter.maybeMergeSegments(IndexWriter.java:674)
   at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:479)
   at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:462)
   at 
 com.comergent.reference.appservices.productService.search.indexBuilder.CatalogIndexSetBuilder.indexPCFromCache(CatalogIndexSetBuilder.java:630)
   ... 2 more
 2010.12.05 06:25:13:938 Env/http-8080-Processor75:INFO:CMGT_SEARCH 
 IndexSetBuilder: error building the index: 
 

[jira] [Resolved] (LUCENE-2263) Deadlock with FSIndexInput and SegmentReader

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-2263.


Resolution: Won't Fix

2013 Old JIRA cleanup

 Deadlock with FSIndexInput and SegmentReader 
 -

 Key: LUCENE-2263
 URL: https://issues.apache.org/jira/browse/LUCENE-2263
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 2.2
 Environment: bash-3.00$ cat /etc/release
   Solaris 10 10/08 s10s_u6wos_07b SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
 Use is subject to license terms.
 Assembled 27 October 2008
 bash-3.00$ uname -a
 SunOS op06udb1 5.10 Generic_13-03 sun4v sparc SUNW,Sun-Blade-T6340 
Reporter: Antonio Martinez
Priority: Minor

 See http://issues.apache.org/jira/browse/JCR-2426 - issue seen with 
 Jackrabbit 1.4.4 and Lucene 2.2.0.
 There is a deadlock, but it is not visible in the dump which thread is holding 
 the lock, which indicates a VM thread is holding it.
 The involved class FSIndexInput uses the Descriptor class, which has 
 overridden the finalize method and eventually calls RandomAccessFile's close 
 and finalize methods.
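 
 A schematic illustration (not the actual Lucene 2.2 source) of the pattern 
 described above - a finalizer that closes the underlying RandomAccessFile. 
 Finalizers run on a JVM-internal thread, which is consistent with the lock 
 owner not showing up in the thread dump.
 {noformat}
 import java.io.IOException;
 import java.io.RandomAccessFile;
 
 // Sketch of a Descriptor-like class whose finalizer closes the file;
 // close() can contend for locks held by application threads.
 class DescriptorSketch extends RandomAccessFile {
   DescriptorSketch(String path) throws IOException {
     super(path, "r");
   }
 
   @Override
   protected void finalize() throws Throwable {
     try {
       close();
     } finally {
       super.finalize();
     }
   }
 }
 {noformat}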



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-373) Query parts ending with a colon are handled badly

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-373.
---

Resolution: Won't Fix

2013 Old JIRA cleanup

 Query parts ending with a colon are handled badly
 -

 Key: LUCENE-373
 URL: https://issues.apache.org/jira/browse/LUCENE-373
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 1.4
 Environment: Operating System: Windows 2000
 Platform: PC
Reporter: Andrew Stevens
Priority: Minor
  Labels: newdev

 I'm using Lucene 1.4.3, running
 Query query = QueryParser.parse(queryString, "contents", new 
 StandardAnalyzer());
 If queryString is "search title:" i.e. specifying a field name without a
 corresponding value, I get a parsing exception:
 Encountered "<EOF>" at line 1, column 8.
 Was expecting one of:
 "(" ...
 <QUOTED> ...
 <TERM> ...
 <PREFIXTERM> ...
 <WILDTERM> ...
 "[" ...
 "{" ...
 <NUMBER> ...
 If queryString is "title: search", there's no exception.  However, the parsed
 query which is returned is "title:search".  If queryString is "title: 
 contents: text", the parsed query is "title:contents" and the "text" part is 
 ignored completely.  When queryString is "title: text contents:" the above 
 exception is produced again.
 This seems inconsistent.  Given that it's pointless searching for an empty
 string (since it has no tokens), I'd expect both "search title:" & "title: 
 search" to be parsed as "search" (or, given the default field I specified,
 "contents:search"), and "title: contents: text" & "title: text contents:" to
 parse as "text" ("contents:text") i.e. parts which have no term are ignored.  
 At worst I'd expect them all to throw a ParseException rather than just the 
 ones with the colon at the end of the string.
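 
 A small Java illustration of the reported behavior, using the Lucene 1.4-era 
 static QueryParser.parse API; the outcomes in the comments restate the report 
 above rather than a fresh test run.
 {noformat}
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.queryParser.ParseException;
 import org.apache.lucene.queryParser.QueryParser;
 
 public class TrailingColonDemo {
   public static void main(String[] args) throws ParseException {
     // Parses without an exception; per the report, yields title:search.
     System.out.println(
         QueryParser.parse("title: search", "contents", new StandardAnalyzer()));
     // Per the report, this throws ParseException ("Encountered "<EOF>" ...").
     System.out.println(
         QueryParser.parse("search title:", "contents", new StandardAnalyzer()));
   }
 }
 {noformat}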






[jira] [Resolved] (LUCENE-1632) boolean docid set iterator improvement

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1632.


Resolution: Won't Fix

2013 Old JIRA cleanup

 boolean docid set iterator improvement
 --

 Key: LUCENE-1632
 URL: https://issues.apache.org/jira/browse/LUCENE-1632
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 2.4
Reporter: John Wang
 Attachments: Lucene-1632-patch.txt


 This was first brought up in LUCENE-1345, but the LUCENE-1345 conversation has 
 digressed. As suggested, creating a separate issue to track it.
 Added perf comparisons of the boolean set iterators with the current scorers; 
 see patch.
 System: Ubuntu, 
 java version 1.6.0_11,
 Intel Core 2 Duo 2.44GHz
 new milliseconds=470
 new milliseconds=534
 new milliseconds=450
 new milliseconds=443
 new milliseconds=444
 new milliseconds=445
 new milliseconds=449
 new milliseconds=441
 new milliseconds=444
 new milliseconds=445
 new total milliseconds=4565
 old milliseconds=529
 old milliseconds=491
 old milliseconds=428
 old milliseconds=549
 old milliseconds=427
 old milliseconds=424
 old milliseconds=420
 old milliseconds=424
 old milliseconds=423
 old milliseconds=422
 old total milliseconds=4537
 New/Old Time 4565/4537 (100.61715%)
 OrDocIdSetIterator milliseconds=1138
 OrDocIdSetIterator milliseconds=1106
 OrDocIdSetIterator milliseconds=1065
 OrDocIdSetIterator milliseconds=1066
 OrDocIdSetIterator milliseconds=1065
 OrDocIdSetIterator milliseconds=1067
 OrDocIdSetIterator milliseconds=1072
 OrDocIdSetIterator milliseconds=1118
 OrDocIdSetIterator milliseconds=1065
 OrDocIdSetIterator milliseconds=1069
 OrDocIdSetIterator total milliseconds=10831
 DisjunctionMaxScorer milliseconds=1914
 DisjunctionMaxScorer milliseconds=1981
 DisjunctionMaxScorer milliseconds=1861
 DisjunctionMaxScorer milliseconds=1893
 DisjunctionMaxScorer milliseconds=1886
 DisjunctionMaxScorer milliseconds=1885
 DisjunctionMaxScorer milliseconds=1887
 DisjunctionMaxScorer milliseconds=1889
 DisjunctionMaxScorer milliseconds=1891
 DisjunctionMaxScorer milliseconds=1888
 DisjunctionMaxScorer total milliseconds=18975
 Or/DisjunctionMax Time 10831/18975 (57.080368%)
 OrDocIdSetIterator milliseconds=1079
 OrDocIdSetIterator milliseconds=1075
 OrDocIdSetIterator milliseconds=1076
 OrDocIdSetIterator milliseconds=1093
 OrDocIdSetIterator milliseconds=1077
 OrDocIdSetIterator milliseconds=1074
 OrDocIdSetIterator milliseconds=1078
 OrDocIdSetIterator milliseconds=1075
 OrDocIdSetIterator milliseconds=1074
 OrDocIdSetIterator milliseconds=1074
 OrDocIdSetIterator total milliseconds=10775
 DisjunctionSumScorer milliseconds=1398
 DisjunctionSumScorer milliseconds=1322
 DisjunctionSumScorer milliseconds=1320
 DisjunctionSumScorer milliseconds=1305
 DisjunctionSumScorer milliseconds=1304
 DisjunctionSumScorer milliseconds=1301
 DisjunctionSumScorer milliseconds=1304
 DisjunctionSumScorer milliseconds=1300
 DisjunctionSumScorer milliseconds=1301
 DisjunctionSumScorer milliseconds=1317
 DisjunctionSumScorer total milliseconds=13172
 Or/DisjunctionSum Time 10775/13172 (81.80231%)
 AndDocIdSetIterator milliseconds=330
 AndDocIdSetIterator milliseconds=336
 AndDocIdSetIterator milliseconds=298
 AndDocIdSetIterator milliseconds=299
 AndDocIdSetIterator milliseconds=310
 AndDocIdSetIterator milliseconds=298
 AndDocIdSetIterator milliseconds=298
 AndDocIdSetIterator milliseconds=334
 AndDocIdSetIterator milliseconds=298
 AndDocIdSetIterator milliseconds=299
 AndDocIdSetIterator total milliseconds=3100
 ConjunctionScorer milliseconds=332
 ConjunctionScorer milliseconds=307
 ConjunctionScorer milliseconds=302
 ConjunctionScorer milliseconds=350
 ConjunctionScorer milliseconds=300
 ConjunctionScorer milliseconds=304
 ConjunctionScorer milliseconds=305
 ConjunctionScorer milliseconds=303
 ConjunctionScorer milliseconds=303
 ConjunctionScorer milliseconds=299
 ConjunctionScorer total milliseconds=3105
 And/Conjunction Time 3100/3105 (99.83897%)
 main contributors to the patch: Anmol Bhasin & Yasuhiro Matsuda






[jira] [Resolved] (LUCENE-993) MultiFieldQueryParser doesn't process search strings containing field references correctly when BooleanClause.Occur.MUST_NOT is used

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-993.
---

Resolution: Won't Fix

2013 Old JIRA cleanup

 MultiFieldQueryParser doesn't process search strings containing field 
 references correctly when BooleanClause.Occur.MUST_NOT is used
 

 Key: LUCENE-993
 URL: https://issues.apache.org/jira/browse/LUCENE-993
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser, core/search
Affects Versions: 2.2
Reporter: Anthony Yeracaris
 Attachments: MultiFieldQueryParserBug.java


 Below, and attached, is a complete Java program illustrating this bug.
 In this program, I have an "allowed" field and a "restricted" field.  The user 
 is not permitted to search the restricted field.  However, if the user 
 provides the search string "allowed:value", then the MultiFieldQueryParser 
 returns "allowed:valu -allowed:valu", which has the effect of finding nothing.
 In the case the user provides a search string containing field references, I 
 would expect the parser to use the field and occur arrays as constraints.  In 
 other words, if the user mentions a field that has an occur of MUST_NOT, 
 then that field should be elided from the search. At the end of parsing, 
 there must be at least one search term, and all MUST fields must be present.
 import org.apache.lucene.queryParser.MultiFieldQueryParser;
 import org.apache.lucene.queryParser.ParseException;
 import org.apache.lucene.search.BooleanClause;
 import org.apache.lucene.analysis.snowball.SnowballAnalyzer;
 public class MultiFieldQueryParserBug {
   public static void main(String[] argv) {
     try {
       System.out.println(MultiFieldQueryParser.parse("allowed:value",
           new String[]{"allowed", "restricted"},
           new BooleanClause.Occur[]{BooleanClause.Occur.SHOULD,
               BooleanClause.Occur.MUST_NOT},
           new SnowballAnalyzer("English")));
       // Output is:
       // allowed:valu -allowed:valu
     } catch (ParseException e) {
       e.printStackTrace();  // generated
     }
   }
 }
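
A minimal sketch of the behaviour the reporter expects - parse only against permitted fields instead of emitting the self-cancelling "allowed:valu -allowed:valu" clause. Modern package names are assumed here (the 2.2-era report used org.apache.lucene.queryParser), so this is an illustration, not the actual fix:
{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
import org.apache.lucene.search.BooleanClause;

public class RestrictedFieldSketch {
  public static void main(String[] args) throws Exception {
    // Only fields the user may search are handed to the parser; a real
    // implementation would first reject or elide references to the
    // restricted field in the query string.
    String[] fields = {"allowed"};
    BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD};
    System.out.println(MultiFieldQueryParser.parse(
        "allowed:value", fields, flags, new StandardAnalyzer()));
  }
}
{code}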



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-2280) IndexWriter.optimize() throws NullPointerException

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-2280.


Resolution: Won't Fix

2013 Old JIRA cleanup

 IndexWriter.optimize() throws NullPointerException
 --

 Key: LUCENE-2280
 URL: https://issues.apache.org/jira/browse/LUCENE-2280
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 2.3.2
 Environment: Win 2003, lucene version 2.3.2, IBM JRE 1.6
Reporter: Ritesh Nigam
  Labels: IndexWriter, NPE, optimize
 Attachments: LuceneUtils.zip, lucene.jar, lucene.zip


 I am using lucene 2.3.2 search APIs for my application. I am indexing a 45GB 
 database, which creates an approx 200MB index file. After finishing the indexing, 
 while running optimize(), I can see a NullPointerException thrown in my log 
 and the index file is getting corrupted. The log says:
 
 Caused by: 
 java.lang.NullPointerException
   at 
 org.apache.lucene.store.BufferedIndexOutput.writeBytes(BufferedIndexOutput.java:49)
   at org.apache.lucene.store.IndexOutput.writeBytes(IndexOutput.java:40)
   at 
 org.apache.lucene.index.SegmentMerger.mergeNorms(SegmentMerger.java:566)
   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:135)
   at 
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3273)
   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:2968)
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:240)
 
 This is happening quite frequently, although I am not able to reproduce 
 it on demand. I saw an issue logged which is somewhat related to mine 
 (http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200809.mbox/%3c6e4a40db-5efc-42da-a857-d59f4ec34...@mikemccandless.com%3E)
  but the only difference here is that I am not using Store.COMPRESS for my 
 fields; I am using Store.NO instead. Please note that I am using the IBM JRE 
 for my application.
 Is this an issue with Lucene? If yes, in which version is it fixed?



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2263) Deadlock with FSIndexInput and SegmentReader

2013-11-30 Thread Antonio Martinez (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13835688#comment-13835688
 ] 

Antonio Martinez commented on LUCENE-2263:
--

I am on vacation until Dec 2nd.
I will respond to your email as soon as I get back.



 Deadlock with FSIndexInput and SegmentReader 
 -

 Key: LUCENE-2263
 URL: https://issues.apache.org/jira/browse/LUCENE-2263
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 2.2
 Environment: bash-3.00$ cat /etc/release
   Solaris 10 10/08 s10s_u6wos_07b SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
 Use is subject to license terms.
 Assembled 27 October 2008
 bash-3.00$ uname -a
 SunOS op06udb1 5.10 Generic_13-03 sun4v sparc SUNW,Sun-Blade-T6340 
Reporter: Antonio Martinez
Priority: Minor

 See http://issues.apache.org/jira/browse/JCR-2426 - Issue seen with 
 Jackrabbit 1.4.4 and lucene 2.2.0
 There is a deadlock but it is not visible in the dump what thread is holding 
 the lock, which indicates a VM thread is holding it.
 The class involved, FSIndexInput, uses the Descriptor class, which 
 overrides the finalize method and eventually calls the RandomAccessFile close 
 and finalize methods.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-2511) OutOfMemoryError should not be wrapped in an IllegalStateException, as it is misleading for fault-tolerant programs

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-2511.


Resolution: Won't Fix

2013 Old JIRA cleanup

 OutOfMemoryError should not be wrapped in an IllegalStateException, as it is 
 misleading for fault-tolerant programs
 ---

 Key: LUCENE-2511
 URL: https://issues.apache.org/jira/browse/LUCENE-2511
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 2.4.1
Reporter: David Sitsky
Priority: Minor

 I have a program which does explicit commits.  On one occasion, I saw the 
 following exception thrown:
 java.lang.IllegalStateException: this writer hit an OutOfMemoryError; cannot 
 commit
 at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:4061)
 at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:4136)
 at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:4114)
 In our program, we treat all errors as fatal and terminate the program (and 
 restart).  Runtime exceptions are sometimes handled differently, since they 
 are usually indicative of a programming bug that might be recoverable 
 in some situations.
 I think the OutOfMemoryError should not be wrapped as a runtime exception, 
 as this can mask a serious issue from a fault-tolerant application.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-1515) Improved(?) Swedish snowball stemmer

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1515.


Resolution: Won't Fix

2013 Old JIRA cleanup

 Improved(?) Swedish snowball stemmer
 

 Key: LUCENE-1515
 URL: https://issues.apache.org/jira/browse/LUCENE-1515
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Affects Versions: 2.4
Reporter: Karl Wettin
 Attachments: LUCENE-1515.txt


 The Snowball stemmer for Swedish lacks support for '-an' and '-ans' related 
 suffix stripping, ending up with incompatible stems, for example klocka, 
 klockor, klockornas, klockAN, klockANS.  Complete list of new suffix 
 stripping rules:
 {pre}
 'an' 'anen' 'anens' 'anare' 'aner' 'anerna' 'anernas'
 'ans' 'ansen' 'ansens' 'anser' 'ansera'  'anserar' 'anserna' 
 'ansernas'
 'iera'
 (delete)
 {pre}
 The problem is all the exceptions (e.g. svans|svan, finans|fin, nyans|ny), and 
 this is an attempt at solving that problem. The rules and exceptions are 
 based on the [SAOL|http://en.wikipedia.org/wiki/Svenska_Akademiens_Ordlista] 
 entries suffixed with 'an' and 'ans'. There are a few known problematic stemming 
 rules, but it seems to work quite a bit better than the current SwedishStemmer. 
 It would not be a bad idea to check all of the SAOL entries in order to verify 
 the integrity of the rules.
 My Snowball syntax skills are rather limited, so I'm certain the code could be 
 optimized quite a bit.
 *The code is released under BSD and not ASL*. I've been posting a bit in the 
 Snowball forum and privately to Martin Porter himself but never got any 
 response, so now I post it here instead in the hope of some momentum.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-1934) Rework (Float)LatLng implementation and distance calculation

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1934.


Resolution: Won't Fix

2013 Old JIRA cleanup

 Rework (Float)LatLng implementation and distance calculation
 

 Key: LUCENE-1934
 URL: https://issues.apache.org/jira/browse/LUCENE-1934
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Affects Versions: 2.9
Reporter: Nicolas Helleringer
Priority: Minor
 Attachments: LUCENE-1934.patch


 Clean-up of the code and normalisation of the distance calculation to the standard.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-2583) Transparent chunk transformation (compression) of index directory

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-2583.


Resolution: Won't Fix

2013 Old JIRA cleanup

 Transparent chunk transformation (compression) of index directory
 -

 Key: LUCENE-2583
 URL: https://issues.apache.org/jira/browse/LUCENE-2583
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Affects Versions: 2.4.1, 2.9, 2.9.1, 2.9.2, 2.9.3, 3.0, 3.0.1, 3.0.2
Reporter: Mitja Lenič
Priority: Minor

 In some cases a user is willing to sacrifice speed for space efficiency or 
 better data security.  I've developed a driver for Directory that enables 
 transparent compression (any transformation) of directory files, using 
 the decorator pattern.  With current experience, compression ratios are between 
 1:5 and 1:10, depending on the type of data stored in the index. 
 Directory files are sliced into fixed chunks, each chunk separately 
 transformed (e.g. compressed, encrypted, ...) and written to a supporting 
 (nested) directory for storage. 
 I've created a project page at [http://code.google.com/p/lucenetransform/] and 
 am also prepared to contribute it to contrib/store.
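
The decorator idea described above, sketched against the modern FilterDirectory base class (which did not exist in the 2.x/3.x versions this issue targets; illustration only):
{code}
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FilterDirectory;

// A transparent wrapper: createOutput/openInput would be overridden to
// slice files into fixed chunks and compress/encrypt each chunk before
// delegating to the nested directory.
public class TransformingDirectory extends FilterDirectory {
  public TransformingDirectory(Directory delegate) {
    super(delegate);
  }
}
{code}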



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-1799) Unicode compression

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1799.


Resolution: Won't Fix

2013 Old JIRA cleanup

 Unicode compression
 ---

 Key: LUCENE-1799
 URL: https://issues.apache.org/jira/browse/LUCENE-1799
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/store
Affects Versions: 2.4.1
Reporter: DM Smith
Priority: Minor
 Attachments: Benchmark.java, Benchmark.java, Benchmark.java, 
 LUCENE-1779.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, 
 LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, 
 LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, 
 LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799.patch, LUCENE-1799_big.patch


 In LUCENE-1793, there is the off-topic suggestion to provide compression of 
 Unicode data. The motivation was a custom encoding in a Russian analyzer. The 
 original supposition was that it provided a more compact index.
 This led to the comment that a different or compressed encoding would be a 
 generally useful feature. 
 BOCU-1 was suggested as a possibility. This is a patented algorithm by IBM 
 with an implementation in ICU. If Lucene provided its own implementation, a 
 freely available, royalty-free license would need to be obtained.
 SCSU is another Unicode compression algorithm that could be used. 
 An advantage of these methods is that they work on the whole of Unicode. If 
 that is not needed, an encoding such as iso8859-1 (or whatever covers the 
 input) could be used.
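
A quick way to see what BOCU-1 buys over UTF-8 for non-Latin text, assuming ICU4J's charset module (icu4j-charset) is on the classpath:
{code}
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import com.ibm.icu.charset.CharsetProviderICU;

public class Bocu1Demo {
  public static void main(String[] args) {
    Charset bocu = new CharsetProviderICU().charsetForName("BOCU-1");
    String s = "пример русского текста"; // Russian sample text
    int utf8Len = s.getBytes(StandardCharsets.UTF_8).length;
    int bocuLen = s.getBytes(bocu).length;
    System.out.println("UTF-8: " + utf8Len + " bytes, BOCU-1: " + bocuLen + " bytes");
  }
}
{code}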



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-2081) CartesianShapeFilter improvements

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-2081.


Resolution: Won't Fix

2013 Old JIRA cleanup

 CartesianShapeFilter improvements
 -

 Key: LUCENE-2081
 URL: https://issues.apache.org/jira/browse/LUCENE-2081
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Affects Versions: 2.9
Reporter: Grant Ingersoll
Priority: Minor

 The CartesianShapeFilter could use some improvements.  For starters, if we 
 make sure the boxIds are sorted in index order, this should reduce the cost 
 of seeks.  I think we should also replace the logging with an approach similar 
 to Lucene's output stream.  We can also do Term creation a bit more 
 efficiently.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-1765) incorrect doc description of fielded query syntax

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1765.


Resolution: Won't Fix

2013 Old JIRA cleanup

 incorrect doc description of fielded query syntax
 -

 Key: LUCENE-1765
 URL: https://issues.apache.org/jira/browse/LUCENE-1765
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/website
Affects Versions: 2.4.1
 Environment: lucene.apache.org docs
Reporter: solrize
Priority: Minor

 http://lucene.apache.org/java/2_4_1/queryparsersyntax.html#Fields says:
   You can search any field by typing the field name followed by a colon ":" 
 and then the term you are looking for. 
 This is slightly incomplete, since the stuff after the fieldname can be a more 
 complex query, not necessarily a term.  For example, 
 title:(do it right)
 seems to work when I tried it.  It would be good if the doc were updated to 
 describe the syntax exactly.
 Also, documentation should be one of the components selectable in bug 
 reports.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-2518) Make check of BooleanClause.Occur[] in MultiFieldQueryParser.parse less stubborn

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-2518.


Resolution: Won't Fix

2013 Old JIRA cleanup

 Make check of BooleanClause.Occur[] in MultiFieldQueryParser.parse less 
 stubborn
 

 Key: LUCENE-2518
 URL: https://issues.apache.org/jira/browse/LUCENE-2518
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Affects Versions: 2.9, 2.9.1, 2.9.2, 2.9.3, 3.0, 3.0.1, 3.0.2
Reporter: Itamar Syn-Hershko
Priority: Minor

 Update the check in:
   public static Query parse(Version matchVersion, String query, String[] 
 fields,
   BooleanClause.Occur[] flags, Analyzer analyzer) throws ParseException {
 if (fields.length != flags.length)
   throw new IllegalArgumentException("fields.length != flags.length");
 To be:
 if (fields.length > flags.length)
 So the consumer can use one Occur array and apply it to fields selectively. The 
 only danger here is hitting a non-existent cell in flags, and this check 
 guards against that just as well without limiting usability for such cases.
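
What the relaxed check would permit, as a sketch: one reusable flags array that is at least as long as any fields array paired with it, with surplus entries simply never read:
{code}
import org.apache.lucene.search.BooleanClause;

public class SelectiveFlags {
  public static void main(String[] args) {
    String[] fields = {"title", "body"};
    BooleanClause.Occur[] flags = {
        BooleanClause.Occur.SHOULD,
        BooleanClause.Occur.SHOULD,
        BooleanClause.Occur.MUST_NOT // spare entry, unused for two fields
    };
    // With the proposed check, fields.length <= flags.length is accepted.
    System.out.println(fields.length + " fields, " + flags.length + " flags");
  }
}
{code}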



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-1930) Scale moderator not in line with sinusoidal projector

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1930.


Resolution: Won't Fix

2013 Old JIRA cleanup

 Scale moderator not in line with sinusoidal projector
 -

 Key: LUCENE-1930
 URL: https://issues.apache.org/jira/browse/LUCENE-1930
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 2.9, 2.9.1, 3.0
Reporter: Nicolas Helleringer
Assignee: Chris Male
 Attachments: LUCENE-1930.patch


 The current implementation in spatial Lucene does:
 public double getTierBoxId (double latitude, double longitude) {
   double[] coords = projector.coords(latitude, longitude);
   double id = getBoxId(coords[0]) + (getBoxId(coords[1]) / 
 tierVerticalPosDivider);
   return id;
 }
 private double getBoxId (double coord){
   return Math.floor(coord / (idd / this.tierLength));
 }
 with
 Double idd = new Double(180);
 in the CartesianTierPlotter constructor.
 But the current SinusoidalProjector, set and used in the initialisation of 
 CartesianTierPlotter, does:
 public double[] coords(double latitude, double longitude) {
   double rlat = Math.toRadians(latitude);
   double rlong = Math.toRadians(longitude);
   double nlat = rlong * Math.cos(rlat);
   double r[] = {nlat, rlong};
   return r;
 }
 Thus we moderate, with idd = 180, a coord that is in radian space.
 Things to do:
 1) Fix idd to: double idd = Math.PI;
 2) Move the idd definition to the IProjector interface: the coord space should 
 belong to the projector doing the job. Change the code in CTP to use that 
 new interface.
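
Item 1 as a compilable sketch - the moderator must live in the same radian space the projector produces (names simplified from the actual CartesianTierPlotter code):
{code}
public class TierPlotterSketch {
  private static final double IDD = Math.PI; // was new Double(180)
  private final double tierLength;

  TierPlotterSketch(double tierLength) {
    this.tierLength = tierLength;
  }

  double getBoxId(double coord) {
    // coord is a radian-space value from SinusoidalProjector.coords()
    return Math.floor(coord / (IDD / tierLength));
  }
}
{code}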



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-1815) Geohash encode/decode floating point problems

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1815.


Resolution: Won't Fix

2013 Old JIRA cleanup

 Geohash encode/decode floating point problems
 -

 Key: LUCENE-1815
 URL: https://issues.apache.org/jira/browse/LUCENE-1815
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 2.9
Reporter: Wouter Heijke
Priority: Minor

 I'm finding the Geohash support in the spatial package to be rather 
 unreliable.
 Here is the outcome of a test that encodes/decodes the same lat/lon and 
 geohash a few times.
 The format:
 action geohash=(latitude, longitude)
 The result:
 encode u173zq37x014=(52.3738007,4.8909347)
 decode u173zq37x014=(52.3737996,4.890934)
 encode u173zq37rpbw=(52.3737996,4.890934)
 decode u173zq37rpbw=(52.3737996,4.89093295)
 encode u173zq37qzzy=(52.3737996,4.89093295)
 if I now change to the google code implementation:
 encode u173zq37x014=(52.3738007,4.8909347)
 decode u173zq37x014=(52.37380061298609,4.890934377908707)
 encode u173zq37x014=(52.37380061298609,4.890934377908707)
 decode u173zq37x014=(52.37380061298609,4.890934377908707)
 encode u173zq37x014=(52.37380061298609,4.890934377908707)
 Note the differences between the geohashes in both situations and the 
 lat/lon's!
 Now things get worse if you work on low-precision geohashes:
 decode u173=(52.0,4.0)
 encode u14zg429yy84=(52.0,4.0)
 decode u14zg429yy84=(52.0,3.99)
 encode u14zg429ywx6=(52.0,3.99)
 and google:
 decode u173=(52.20703125,4.5703125)
 encode u173=(52.20703125,4.5703125)
 decode u173=(52.20703125,4.5703125)
 encode u173=(52.20703125,4.5703125)
 We are using geohashes extensively and will now use the google code version 
 unfortunately.
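
The expected invariant, for reference: re-encoding a decoded point should be idempotent. A sketch of that round-trip against spatial4j's GeohashUtils (a later library, used here purely to illustrate the behaviour the reporter wants):
{code}
import org.locationtech.spatial4j.context.SpatialContext;
import org.locationtech.spatial4j.io.GeohashUtils;
import org.locationtech.spatial4j.shape.Point;

public class GeohashRoundTrip {
  public static void main(String[] args) {
    String hash = GeohashUtils.encodeLatLon(52.3738007, 4.8909347);
    Point p = GeohashUtils.decode(hash, SpatialContext.GEO);
    String hash2 = GeohashUtils.encodeLatLon(p.getY(), p.getX());
    System.out.println(hash + " vs " + hash2); // should print the same hash twice
  }
}
{code}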



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-1777) Error on distance query where miles < 1.0

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1777.


Resolution: Won't Fix

2013 Old JIRA cleanup

 Error on distance query where miles < 1.0
 -

 Key: LUCENE-1777
 URL: https://issues.apache.org/jira/browse/LUCENE-1777
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 2.9
Reporter: Glen Stampoultzis
Assignee: Chris Male
 Attachments: LUCENE-1777.patch


 If miles is under 1.0, the distance query will break.
 To reproduce modify the file
 http://svn.apache.org/viewvc/lucene/java/trunk/contrib/spatial/src/test/org/apache/lucene/spatial/tier/TestCartesian.java?revision=794721
 And set the line:
 final double miles = 6.0;
 to 
 final double miles = 0.5;



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-1914) allow for custom segment files

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-1914.


Resolution: Won't Fix

2013 Old JIRA cleanup

 allow for custom segment files
 --

 Key: LUCENE-1914
 URL: https://issues.apache.org/jira/browse/LUCENE-1914
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Affects Versions: 2.9
Reporter: John Wang

 Create a plugin framework where one can provide some sort of callback to add 
 to a custom segment file, given a doc, and provide some sort of merge logic. 
 This is in light of the flexible indexing effort.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



please put the jiras back?

2013-11-30 Thread Robert Muir
Erick, why did you mark a ton of LUCENE issues "Won't Fix"?

Just because they've been open for a while?

Can you please put these back? Closing bugs like
https://issues.apache.org/jira/browse/LUCENE-993, just because nobody
has fixed them yet, I don't think that helps anything.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1899) dates prior to 1000AD are not formatted properly in responses

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1899.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 dates prior to 1000AD are not formatted properly in responses
 -

 Key: SOLR-1899
 URL: https://issues.apache.org/jira/browse/SOLR-1899
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.1.0, 1.2, 1.3, 1.4
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-1899.patch


 As noted on the mailing list, if a document is added to Solr with a date 
 field such as "0001-01-01T00:00:00Z", then when that document is returned by a 
 search the year will be improperly formatted as "1-01-01T00:00:00Z".
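
The symptom is consistent with a year pattern that does not zero-pad; a worked demonstration with SimpleDateFormat ("y" vs "yyyy" - an assumption about the mechanism, not the actual Solr code path):
{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class YearPaddingDemo {
  public static void main(String[] args) throws Exception {
    SimpleDateFormat bad = new SimpleDateFormat("y-MM-dd'T'HH:mm:ss'Z'", Locale.ROOT);
    SimpleDateFormat good = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.ROOT);
    bad.setTimeZone(TimeZone.getTimeZone("UTC"));
    good.setTimeZone(TimeZone.getTimeZone("UTC"));
    Date d = good.parse("0001-01-01T00:00:00Z");
    System.out.println(bad.format(d));  // 1-01-01T00:00:00Z
    System.out.println(good.format(d)); // 0001-01-01T00:00:00Z
  }
}
{code}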



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-490) dismax should autoescape + and - followed by whitespace (maybe?)

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-490.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 dismax should autoescape + and - followed by whitespace (maybe?)
 

 Key: SOLR-490
 URL: https://issues.apache.org/jira/browse/SOLR-490
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Affects Versions: 1.1.0, 1.2, 1.3
Reporter: Hoss Man

 As discussed in this thread...
 Date: Tue, 26 Feb 2008 04:13:54 -0500
 From: Kevin Xiao
 To: solr-user
 Subject: solr to handle special character
 ...the docs for dismax said that + or - followed by *nonwhitespace* 
 characters had special meaning ... for some reason I thought the 
 dismax handler had code that would look for things like "xyz - abc" and 
 autoescape it to "xyz \- abc" (after calling partialEscape) so that the +/- 
 would only be special if they were true prefix operators.
 Apparently this never actually existed.
 We should figure out if that's how it *should* work, and if so, implement it.
 This would also be a good time to make the autoescaping behavior of dismax 
 more configurable, or at least more overridable by subclasses (it's currently 
 handled by a static method call).
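
A sketch of the autoescaping being discussed: escape '+' and '-' only when they cannot be true prefix operators, i.e. when followed by whitespace or at the end of the input. Not the actual dismax code, just the rule as a regex:
{code}
public class DismaxEscapeSketch {
  static String escapeLooseOperators(String q) {
    // A '+'/'-' followed by whitespace (or nothing) is not a prefix
    // operator, so escape it; "-abc" style prefixes are left alone.
    return q.replaceAll("([+\\-])(?=\\s|$)", "\\\\$1");
  }

  public static void main(String[] args) {
    System.out.println(escapeLooseOperators("xyz - abc")); // xyz \- abc
    System.out.println(escapeLooseOperators("xyz -abc"));  // unchanged
  }
}
{code}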



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1591) XMLWriter#writeAttr silently ignores null attribute values

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1591.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 XMLWriter#writeAttr silently ignores null attribute values
 --

 Key: SOLR-1591
 URL: https://issues.apache.org/jira/browse/SOLR-1591
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers
Affects Versions: 1.1.0
 Environment: My local MacBook pro laptop.
Reporter: Chris A. Mattmann
Priority: Minor
 Attachments: SOLR-1591.Mattmann.112209.patch.txt


 XMLWriter#writeAttr checks for val == null and, if so, does nothing. Instead 
 of doing nothing, it could leverage its method signature and throw an 
 IOException declaring that the value provided is null. Patch attached.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-1515) Improved(?) Swedish snowball stemmer

2013-11-30 Thread Karl Wettin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13835704#comment-13835704
 ] 

Karl Wettin commented on LUCENE-1515:
-

This is actually a rather nice stemmer if you ask me. It's semi-expensive due 
to the rule-set but does a much better job compared to the standard Swedish 
Snowball stemmer and I've successfully used it in several projects.

 Improved(?) Swedish snowball stemmer
 

 Key: LUCENE-1515
 URL: https://issues.apache.org/jira/browse/LUCENE-1515
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Affects Versions: 2.4
Reporter: Karl Wettin
 Attachments: LUCENE-1515.txt


 The Snowball stemmer for Swedish lacks support for '-an' and '-ans' related 
 suffix stripping, ending up with incompatible stems, for example klocka, 
 klockor, klockornas, klockAN, klockANS.  Complete list of new suffix 
 stripping rules:
 {pre}
 'an' 'anen' 'anens' 'anare' 'aner' 'anerna' 'anernas'
 'ans' 'ansen' 'ansens' 'anser' 'ansera'  'anserar' 'anserna' 
 'ansernas'
 'iera'
 (delete)
 {pre}
 The problem is all the exceptions (e.g. svans|svan, finans|fin, nyans|ny), and 
 this is an attempt at solving that problem. The rules and exceptions are 
 based on the [SAOL|http://en.wikipedia.org/wiki/Svenska_Akademiens_Ordlista] 
 entries suffixed with 'an' and 'ans'. There are a few known problematic stemming 
 rules, but it seems to work quite a bit better than the current SwedishStemmer. 
 It would not be a bad idea to check all of the SAOL entries in order to verify 
 the integrity of the rules.
 My Snowball syntax skills are rather limited, so I'm certain the code could be 
 optimized quite a bit.
 *The code is released under BSD and not ASL*. I've been posting a bit in the 
 Snowball forum and privately to Martin Porter himself but never got any 
 response, so now I post it here instead in the hope of some momentum.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-668) Snapcleaner removes newest snapshots in Solaris

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-668.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 Snapcleaner removes newest snapshots in Solaris
 ---

 Key: SOLR-668
 URL: https://issues.apache.org/jira/browse/SOLR-668
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.2
 Environment: Solaris 10
Reporter: Gabriel Hernandez
Priority: Minor

 When running the snapcleaner script from cron with the -N option, the script 
 is removing the newest snapshots instead of the oldest snapshots.  I tweaked 
 and validated that this can be corrected by making the following change in the 
 snapcleaner script:
 elif [[ -n ${num} ]]
   then
   logMessage cleaning up all snapshots except for the most recent 
 ${num} ones
   unset snapshots count
 - snapshots=`ls -cd ${data_dir}/snapshot.* 2>/dev/null`
 + snapshots=`ls -crd ${data_dir}/snapshot.* 2>/dev/null` 



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-504) dismax with pf missing or blank causes empty boolean clause to be added to main query

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-504.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 dismax with pf missing or blank causes empty boolean clause to be added to 
 main query
 -

 Key: SOLR-504
 URL: https://issues.apache.org/jira/browse/SOLR-504
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 1.2
Reporter: Hoss Man
Priority: Minor

 The workaround is to always use the pf param ... it's probably better anyway 
 (making it the same as the qf is a good start).



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-523) Solr Schema - version number requirements

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-523.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 Solr Schema - version number requirements
 -

 Key: SOLR-523
 URL: https://issues.apache.org/jira/browse/SOLR-523
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.2
Reporter: Andrew Nagy
Assignee: Hoss Man
Priority: Minor

 When I change the version number of the Solr schema from 1.0 or 1.1 to 
 something arbitrary, like say 0.8.1, Solr reports a parsing error with the 
 schema - however, a version number of 0.8 is accepted.  It would be nice if 
 Solr reported an "invalid schema version" error instead, or at least put 
 something in the log that has a bit more detail.
 You could add a check in src/java/org/apache/solr/schema/IndexSchema.java 
 that might look like this:
 Node node = (Node) xpath.evaluate("/schema/@version", document, 
 XPathConstants.NODE);
 if (!("1.0".equals(node) || "1.1".equals(node))) {
 log.warning("invalid schema version - use 1.0 or 1.1 only");
 }
 It's quite poor to hardcode the version numbers - but I thought I should 
 include something to give you a more concrete idea of what I am talking about.
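
The same idea with the hardcoding concern addressed a little - parse the version and validate it in one place. A sketch only; names and accepted versions are assumptions, not the actual IndexSchema code:
{code}
import org.w3c.dom.Node;

public final class SchemaVersionCheck {
  static float parseVersion(Node node) {
    String raw = (node == null) ? null : node.getNodeValue();
    try {
      float v = Float.parseFloat(raw);
      if (v != 1.0f && v != 1.1f) {
        throw new IllegalArgumentException(
            "invalid schema version '" + raw + "' - use 1.0 or 1.1");
      }
      return v;
    } catch (NumberFormatException | NullPointerException e) {
      throw new IllegalArgumentException(
          "unparseable schema version '" + raw + "'", e);
    }
  }
}
{code}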



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-2300) snapinstaller on slave is failing

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-2300.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 snapinstaller on slave is failing
 -

 Key: SOLR-2300
 URL: https://issues.apache.org/jira/browse/SOLR-2300
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.3
 Environment: Linux, Jboss 5.0GA, solr 1.3.0
Reporter: sakunthala padmanabhuni

 Hi,
 We are using Solr on Mac OS X and it is working fine.  We have moved the same 
 setup to Linux.  We have a master/slave setup. Every 5 minutes, the index is 
 replicated from master to slave and installed on the slave.  But on Linux, 
 on the slave, when the snapinstaller script is called it is failing and 
 showing the below error in the logs:
 /bin/rm: cannot remove 
 `/ngs/app/esearcht/Slave2index/data/index/.nfs000111030749': 
 Device or resource busy
 This error occurs in the snapinstaller script at the below lines:
   cp -lr ${name}/ ${data_dir}/index.tmp$$ && \
   /bin/rm -rf ${data_dir}/index && \
   mv -f ${data_dir}/index.tmp$$ ${data_dir}/index
 It is not able to remove the index folder, so the index.tmp files keep 
 growing in the data directory.
 Our data directory is /ngs/app/esearcht/Slave2index/data.  When checked 
 with ls -al in the index directory, there are some .nfs files still there, 
 which are preventing the index directory from being deleted.  And these .nfs 
 files are still being used by Solr in JBoss.
 This setup gives this issue only on Linux.  Is this a known bug on Linux?



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1666) SolrParams conversion to NamedList and back to SolrParams misses the Arrays with more than one value

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1666.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 SolrParams conversion to NamedList and back to SolrParams misses the Arrays 
 with more than one value
 

 Key: SOLR-1666
 URL: https://issues.apache.org/jira/browse/SOLR-1666
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 1.3, 1.4
Reporter: Nestor Oviedo
Assignee: Hoss Man
Priority: Minor
 Attachments: SOLR-1666.patch


 When a parameter in a SolrParams instance is an array that has more than one 
 element, the method SolrParams.toNamedList() generates a NamedList<Object> 
 correctly, but when the method SolrParams.toSolrParams() is invoked with that 
 NamedList instance, the resulting SolrParams instance has that parameter as a 
 String, which is the result of the String[].toString() method.
 TestCase:
 {code}
 public class TestDismaxQParserPlugin extends DisMaxQParserPlugin {
   private Log log = LogFactory.getLog(this.getClass());
   public QParser createParser(String qstr, SolrParams localParams,
       SolrParams params, SolrQueryRequest req) {
     // TestCase with the param facet.field
     if (params.getParams(FacetParams.FACET_FIELD) != null) {
       // Original Values
       log.debug("FACET.FIELD Param - Before");
       String[] facetFieldBefore = params.getParams(FacetParams.FACET_FIELD);
       log.debug("toString(): " + facetFieldBefore.toString());
       log.debug("length: " + facetFieldBefore.length);
       log.debug("Elements:");
       for (String value : facetFieldBefore)
         log.debug("[class " + value.getClass().getName() + "] " + value);

       // Transforming
       NamedList<Object> paramsList = params.toNamedList();
       params = SolrParams.toSolrParams(paramsList);
       // Result Values
       log.debug("FACET.FIELD Param - After");
       String[] facetFieldAfter = params.getParams(FacetParams.FACET_FIELD);
       log.debug("toString(): " + facetFieldAfter.toString());
       log.debug("length: " + facetFieldAfter.length);
       log.debug("Elements:");
       for (String value : facetFieldAfter)
         log.debug("[class " + value.getClass().getName() + "] " + value);
     } else {
       log.debug("FACET.FIELD NOT SPECIFIED");
     }
     return super.createParser(qstr, localParams, params, req);
   }
 }
 {code}
 Editing the solrconfig.xml file for this QueryParser to be used and using a 
 URL like 
 http://host:port/path/select?q=something&facet=true&facet.field=subject&facet.field=date
  the output is (only the interesting lines):
 FINA: FACET.FIELD Param - Before
 FINA: toString():[Ljava.lang.String;@c96ad7c
 FINA: length:2
 FINA: Elements:
 FINA: [class java.lang.String] subject
 FINA: [class java.lang.String] date
 FINA: FACET.FIELD Param - After
 FINA: toString():[Ljava.lang.String;@44775121
 FINA: length:1
 FINA: Elements:
 FINA: [class java.lang.String] [Ljava.lang.String;@c96ad7c
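
The round trip can be reproduced without a full QParserPlugin; a minimal sketch against the 1.x/3.x-era SolrParams API (method names as in the report):
{code}
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;

public class RoundTripDemo {
  public static void main(String[] args) {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.add("facet.field", "subject");
    params.add("facet.field", "date");
    NamedList<Object> list = params.toNamedList();
    SolrParams back = SolrParams.toSolrParams(list);
    // On affected versions this prints 1, not 2: the multi-valued
    // parameter collapsed into a single String.
    System.out.println(back.getParams("facet.field").length);
  }
}
{code}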



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-678) HTMLStripStandardTokenizerFactory doesn't interpret word boundaries on html tags correctly.

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-678.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 HTMLStripStandardTokenizerFactory doesn't interpret word boundaries on html 
 tags correctly.
 ---

 Key: SOLR-678
 URL: https://issues.apache.org/jira/browse/SOLR-678
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 1.2
 Environment: Mac OS X 10.5.4, java version 1.5.0_13
Reporter: Matt Connolly
   Original Estimate: 4h
  Remaining Estimate: 4h

 The HTMLStripStandardTokenizerFactory filter does not place word boundaries 
 on HTML tags like it should.
 For example, indexing the text <h2>title</h2><p>some comment</p> results in 
 two words being indexed: "titlesome" and "comment", when there should be three 
 words: "title", "some" and "comment".
 Not all tags need this; for example, it may be perfectly reasonable to write 
 <b>sub</b>script to be indexed as "subscript", since <b> is interpreted 
 as inline, not block.
 I would suggest all block or paragraph tags be translated into spaces so that 
 text on either side of the tag is considered separate tokens, e.g.: p div h1 h2 
 h3 h4 h5 h6 br hr pre (etc.)
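
The behaviour is easy to inspect in isolation with the HTML stripper (shown here via the later HTMLStripCharFilter class; the 1.2-era factory wrapped the equivalent HTMLStripReader):
{code}
import java.io.Reader;
import java.io.StringReader;
import org.apache.lucene.analysis.charfilter.HTMLStripCharFilter;

public class StripDemo {
  public static void main(String[] args) throws Exception {
    Reader in = new HTMLStripCharFilter(
        new StringReader("<h2>title</h2><p>some comment</p>"));
    StringBuilder sb = new StringBuilder();
    for (int c; (c = in.read()) != -1; ) {
      sb.append((char) c);
    }
    // Print what the downstream tokenizer would actually see, so one can
    // check whether the stripped tags left a word boundary behind.
    System.out.println(sb);
  }
}
{code}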



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1196) Incorrect matches when using non alphanumeric search string !@#$%\^\\*\(\)

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1196.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Incorrect matches when using non alphanumeric search string !@#$%\^\\*\(\)
 ---

 Key: SOLR-1196
 URL: https://issues.apache.org/jira/browse/SOLR-1196
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
 Environment: Solr 1.3/ Java 1.6/ Win XP/Eclipse 3.3
Reporter: Sam Michael

 When matching strings that do not include alphanumeric chars, all the data is 
 returned as matches. (There is actually no match, so nothing should be 
 returned.)
 When I run a query like (activity_type:NAME) AND title:(\!@#$%\^\*\(\)), 
 all the documents are returned even though there is not a single match. There 
 is no title that matches the string (which has been escaped).
 My document structure is as follows:
 <doc>
 <str name="activity_type">NAME</str>
 <str name="title">Bathing</str>
 ...
 </doc> 
 The title field is of type text_title, which is described below. 
 <fieldType name="text_title" class="solr.TextField" 
 positionIncrementGap="100">
   <analyzer type="index">
     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" 
 generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="1" 
 splitOnCaseChange="1"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
   </analyzer>
   <analyzer type="query">
     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" 
 ignoreCase="true" expand="true"/>
     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" 
 generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="1" 
 splitOnCaseChange="1"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
   </analyzer>
 </fieldType> 
 -
 Yonik's analysis is as follows:
 <str name="rawquerystring">-features:foo features:(\!@#$%\^\*\(\))</str>
 <str name="querystring">-features:foo features:(\!@#$%\^\*\(\))</str>
 <str name="parsedquery">-features:foo</str>
 <str name="parsedquery_toString">-features:foo</str>
 The text analysis is throwing away non-alphanumeric chars (probably
 the WordDelimiterFilter).  The Lucene (and Solr) query parser throws
 away term queries when the token is zero length (after analysis).
 Solr then interprets the leftover -features:foo as "all documents
 not containing foo in the features field", so you get a bunch of
 matches. 
 As per his suggestion, a bug is filed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1623) Solr hangs (often throwing java.lang.OutOfMemoryError: PermGen space) when indexing many different field names

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1623.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Solr hangs (often throwing java.lang.OutOfMemoryError: PermGen space) when 
 indexing many different field names
 --

 Key: SOLR-1623
 URL: https://issues.apache.org/jira/browse/SOLR-1623
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 1.3, 1.4
 Environment:
 Tomcat Version: Apache Tomcat/6.0 (snapshot); JVM Version: 1.6.0_13-b03; 
 JVM Vendor: Sun Microsystems Inc.; OS: Linux 2.6.18-164.el5; Architecture: amd64
 and/or
 Tomcat Version: Apache Tomcat/6.0.18; JVM Version: 1.6.0_12-b04; 
 JVM Vendor: Sun Microsystems Inc.; OS: Windows 2003 5.2; Architecture: amd64
Reporter: Laurent Chavet
Priority: Critical

 With the following fields in schema.xml:
 <fields>
   <field name="id" type="sint" indexed="true" stored="true" required="true" /> 
   <dynamicField name="weight_*" type="sint" indexed="true" stored="true"/>
 </fields>
 Run the following code:
 import java.util.ArrayList;
 import java.util.List;
 import org.apache.solr.client.solrj.SolrServer;
 import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
 import org.apache.solr.common.SolrInputDocument;
 public static void main(String[] args) throws Exception {
     SolrServer server;
     try {
         server = new CommonsHttpSolrServer(args[0]);
     } catch (Exception e) {
         System.err.println("can't create server using: " + args[0] + " "
             + e.getMessage());
         throw e;
     }
     for (int i = 0; i < 1000; i++) {
         List<SolrInputDocument> batchedDocs = new ArrayList<SolrInputDocument>();
         for (int j = 0; j < 1000; j++) {
             SolrInputDocument doc = new SolrInputDocument();
             doc.addField("id", i * 1000 + j);
             // hangs after 30 to 50 batches
             doc.addField("weight_aaa" + Integer.toString(i) + "_"
                 + Integer.toString(j), i * 1000 + j);
             // hangs after about 200 batches
             //doc.addField("weight_" + Integer.toString(i) + "_" +
             //    Integer.toString(j), i * 1000 + j);
             batchedDocs.add(doc);
         }
         try {
             server.add(batchedDocs, true);
             System.err.println("Done with batch=" + i);
             // server.commit(); // doesn't change anything
         } catch (Exception e) {
             System.err.println("batchId=" + i + " bad batch: " + e.getMessage());
             throw e;
         }
     }
 }
 And soon the client (which sometimes throws) and Solr will freeze. Sometimes 
 you can see java.lang.OutOfMemoryError: PermGen space in the server logs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-937) Highlighting problem related to stemming

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-937.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 Highlighting problem related to stemming
 

 Key: SOLR-937
 URL: https://issues.apache.org/jira/browse/SOLR-937
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 1.3
Reporter: David Bowen

 Using the example data (as in "ant run-example") from the latest dev version, 
 add the words "electronics" and "connector" to the features field of the 
 first doc in ipod_other.xml.  Now the following query:
 http://localhost:8983/solr/select/?q=electronics&hl=true&hl.fl=features+cat
 will show electronics highlighted in the features field but not in the cat 
 field.  If you search instead for "connector", it is highlighted in both.
 This seems like a bug to me.  A possible but not entirely satisfactory 
 work-around would be to have the cat field copied into another field which is 
 stemmed, and use that other field for highlighting (assuming the search is on 
 the default search field, and not on cat).



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-2235) java.io.IOException: The specified network name is no longer available

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-2235.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 java.io.IOException: The specified network name is no longer available 
 ---

 Key: SOLR-2235
 URL: https://issues.apache.org/jira/browse/SOLR-2235
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3, 1.4, 1.4.1
Reporter: Reshma

 Using Solr 1.4 hosted with Tomcat 6 on Windows 2003
 Search becomes unavailable at times. At the time of failure, solr admin page 
 will be loading. But when we make search query we are getting the following 
 error
 
 HTTP Status 500 - The specified network name is no longer available 
 java.io.IOException: The specified network name is no longer 
 available at java.io.RandomAccessFile.readBytes(Native Method) at 
 java.io.RandomAccessFile.read(Unknown Source) at 
 org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.readInternal(SimpleFSDirectory.java:132)
  at 
 org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:157)
  at 
 org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
  at org.apache.lucene.store.IndexInput.readVInt
 (IndexInput.java:80) at 
 org.apache.lucene.index.TermBuffer.read(TermBuffer.java:64) at 
 org.apache.lucene.index.SegmentTermEnum.next(SegmentTermEnum.java:129) at 
 org.apache.lucene.index.SegmentTermEnum.scanTo
 (SegmentTermEnum.java:160) at 
 org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:211) at 
 org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:179) at 
 org.apache.lucene.index.SegmentReader.docFreq
 (SegmentReader.java:975) at 
 org.apache.lucene.index.DirectoryReader.docFreq(DirectoryReader.java:627) at 
 org.apache.solr.search.SolrIndexReader.docFreq(SolrIndexReader.java:308) at 
 org.apache.lucene.search.IndexSearcher.docFreq
 (IndexSearcher.java:147) at 
 org.apache.lucene.search.Similarity.idfExplain(Similarity.java:765) at 
 org.apache.lucene.search.TermQuery$TermWeight.init(TermQuery.java:46) at 
 org.apache.lucene.search.TermQuery.createWeight
 (TermQuery.java:146) at org.apache.lucene.search.Query.weight(Query.java:99) 
 at org.apache.lucene.search.Searcher.createWeight
 (Searcher.java:230) at 
 org.apache.lucene.search.Searcher.search(Searcher.java:171) at 
 org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1044)
  at 
 org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:940)
  at 
 org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:344) 
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:178)
  at 
 org.apache.solr.handler.component.CollapseComponent.process(CollapseComponent.java:118)
  at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
  at org.apache.solr.core.SolrCore.execute
 (SolrCore.java:1316) at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:336)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:239)
  at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
  at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
  at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
  at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) 
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) 
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
  at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) 
 at org.apache.coyote.http11.Http11AprProcessor.process
 (Http11AprProcessor.java:857) at 
 org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process
 (Http11AprProtocol.java:565) at 
 org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1509) at 
 java.lang.Thread.run
 (Unknown Source) 
 ===
 The error stops when we restart Tomcat.  We are using a file server to store 
 the actual index files, which are not on the same machine as Solr/Tomcat. We 
 have checked and confirmed with the network team that there was no issue. Can 
 someone help us fix the issue?



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (SOLR-1238) exception in solrJ when authentication is used

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1238.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 exception in solrJ when authentication is used
 --

 Key: SOLR-1238
 URL: https://issues.apache.org/jira/browse/SOLR-1238
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 1.3
Reporter: Noble Paul
Priority: Minor
 Attachments: SOLR-1238.patch


 see the thread http://markmail.org/thread/w36ih2fnphbubian
 {code}
 I am getting an error when I am using authentication in Solr. I
 followed the wiki. The error does not appear when I am searching. Below is the
 code snippet and the error.
 Please note I am using a Solr 1.4 development build from SVN.
HttpClient client = new HttpClient();
AuthScope scope = new AuthScope(AuthScope.ANY_HOST, AuthScope.ANY_PORT, null, null);
client.getState().setCredentials(scope, new UsernamePasswordCredentials("guest", "guest"));
SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr", client);
SolrInputDocument doc1 = new SolrInputDocument();
// Add fields to the document
doc1.addField("employeeid", 1237);
doc1.addField("employeename", "Ann");
doc1.addField("employeeunit", "etc");
doc1.addField("employeedoj", "1995-11-31T23:59:59Z");
server.add(doc1);
 Exception in thread main
 org.apache.solr.client.solrj.SolrServerException:
 org.apache.commons.httpclient.ProtocolException: Unbuffered entity
 enclosing request can not be repeated.
at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:468)
at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:242)
at 
 org.apache.solr.client.solrj.request.UpdateRequest.process(UpdateRequest.java:259)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:63)
at test.SolrAuthenticationTest.init(SolrAuthenticationTest.java:49)
at test.SolrAuthenticationTest.main(SolrAuthenticationTest.java:113)
 Caused by: org.apache.commons.httpclient.ProtocolException: Unbuffered
 entity enclosing request can not be repeated.
at 
 org.apache.commons.httpclient.methods.EntityEnclosingMethod.writeRequestBody(EntityEnclosingMethod.java:487)
at 
 org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:2114)
at 
 org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1096)
at 
 org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at 
 org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at 
 org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at 
 org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at 
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:415)
... 5 more.
 {code}
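
A known workaround for this class of failure with commons-httpclient 3.x is to authenticate preemptively, so the unbuffered POST never has to be repeated in response to a 401 challenge (a sketch of the workaround, not the SOLR-1238 patch itself):
{code}
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.UsernamePasswordCredentials;
import org.apache.commons.httpclient.auth.AuthScope;

public class PreemptiveAuthSketch {
  public static void main(String[] args) {
    HttpClient client = new HttpClient();
    client.getParams().setAuthenticationPreemptive(true); // send credentials up front
    client.getState().setCredentials(
        new AuthScope(AuthScope.ANY_HOST, AuthScope.ANY_PORT),
        new UsernamePasswordCredentials("guest", "guest"));
    // Pass this client to CommonsHttpSolrServer as in the snippet above.
  }
}
{code}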



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-723) SolrCore aliasing/swapping may lead to confusing JMX

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-723.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 SolrCore aliasing/swapping may lead to confusing JMX
 --

 Key: SOLR-723
 URL: https://issues.apache.org/jira/browse/SOLR-723
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
Reporter: Henri Biestro
Priority: Minor
 Attachments: SOLR-723-solr-core-swap-JMX-issues-lucene_3x.patch


 As mentioned by Yonik in SOLR-647, JMX registers the core with its name.
 After swapping or re-aliasing the core, the JMX tracking name does not 
 correspond to the actual core anymore.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-534) Return all query results with parameter rows=-1

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-534.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 Return all query results with parameter rows=-1
 ---

 Key: SOLR-534
 URL: https://issues.apache.org/jira/browse/SOLR-534
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.3
 Environment: Tomcat 5.5
Reporter: Lars Kotthoff
Priority: Minor
 Attachments: solr-all-results.patch


 The searcher should return all results matching a query when the parameter 
 rows=-1 is given.
 I know that it is a bad idea to do this in general, but as it explicitly 
 requires a special parameter, people using this feature will be aware of what 
 they are doing. The main use case for this feature is probably debugging, but 
 in some cases one might actually need to retrieve all results because they 
 e.g. are to be merged with results from different sources.
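
For comparison, the usual workaround when all results really are needed: ask for the hit count first, then request that many rows (or page in fixed chunks). A SolrJ-era sketch, with names assumed:
{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.response.QueryResponse;

public final class FetchAll {
  static QueryResponse fetchAll(SolrServer server, String queryString)
      throws SolrServerException {
    SolrQuery q = new SolrQuery(queryString);
    q.setRows(0); // first pass: just the hit count
    long total = server.query(q).getResults().getNumFound();
    q.setRows((int) Math.min(total, Integer.MAX_VALUE));
    return server.query(q); // second pass: everything
  }
}
{code}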



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-112) Hierarchical Handler Config

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-112.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 Hierarchical Handler Config
 ---

 Key: SOLR-112
 URL: https://issues.apache.org/jira/browse/SOLR-112
 Project: Solr
  Issue Type: Improvement
  Components: update
Affects Versions: 1.3
Reporter: Ryan McKinley
Priority: Minor
 Attachments: SOLR-112.patch


 From J.J. Larrea on SOLR-104
 2. What would make this even more powerful would be the ability to subclass 
 (meaning refine and/or extend) request handler configs: If the requestHandler 
 element allowed an attribute extends=another-requesthandler-name and 
 chained the SolrParams, then one could do something like:
   requestHandler name=search/products/all 
 class=solr.DisMaxRequestHandler 
 lst name=defaults
  float name=tie0.01/float
  str name=qf
 text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
  /str
  ... much more, per the dismax example in the sample solrconfig.xml ...
   /requestHandler
   ... and replacing the partitioned example ...
   requestHandler name=search/products/instock 
 extends=search/products/all 
 lst name=appends
   str name=fqinStock:true/str
 /lst
   /requestHandler



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1262) DIH needs support for callable statements

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1262.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 DIH needs support for callable statements 
 --

 Key: SOLR-1262
 URL: https://issues.apache.org/jira/browse/SOLR-1262
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.3
 Environment: linux
 mysql
Reporter: Abdul Chaudhry
Assignee: Noble Paul
Priority: Minor

 During an indexing run we noticed that we were spending a lot of time 
 creating and tearing down queries in mysql
 The queries we are using are complex and involve joins spanning across 
 multiple tables.
 We should support prepared statements in the data import handler via the 
 data-config.xml file - for those databases that support prepared statements.
 We could add a new attribute to the <entity> element in dataConfig - say - 
 pquery or preparedQuery - and then pass the prepared statement and have values 
 filled in by the actual queries for each row using a placeholder - like a ? 
 or something else.
 I would probably start by hacking class JdbcDataSource to try a test but was 
 wondering if anyone had experienced this or had any suggestions or if there 
 is something in the works that I missed - I couldn't find any other bugs 
 mentioning using prepared statements for performance.
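 For reference, a minimal JDBC sketch of the reuse being asked for -- the statement is parsed and planned once, then bound per row (the connection URL, table, and column names are invented):
 {code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PreparedQuerySketch {
    public static void main(String[] args) throws Exception {
        // invented DSN and credentials
        Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/demo", "user", "pass");
        // parsed and planned once by the database, instead of once per row
        PreparedStatement ps = conn.prepareStatement(
                "SELECT title, body FROM docs WHERE parent_id = ?");
        for (int id : new int[]{1, 2, 3}) {
            ps.setInt(1, id); // fill the ? placeholder for this row
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("title"));
            }
            rs.close();
        }
        ps.close();
        conn.close();
    }
}
 {code}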



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-731) CoreDescriptor.getCoreContainer should not be public

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-731.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 CoreDescriptor.getCoreContainer should not be public
 

 Key: SOLR-731
 URL: https://issues.apache.org/jira/browse/SOLR-731
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
Reporter: Henri Biestro
 Attachments: solr-731.patch


 For the very same reasons that CoreDescriptor.getCoreProperties did not need 
 to be public (aka SOLR-724)
 It also means the CoreDescriptor ctor should not need a CoreContainer
 The CoreDescriptor is only meant to be describing a to-be created SolrCore.
 However, we need access to the CoreContainer from the SolrCore now that we 
 are guaranteed the CoreContainer always exists.
 This is also a natural consequence of SOLR-647 now that the CoreContainer is 
 not a map of CoreDescriptor but a map of SolrCore.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-732) Collation bug

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-732.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 Collation bug
 -

 Key: SOLR-732
 URL: https://issues.apache.org/jira/browse/SOLR-732
 Project: Solr
  Issue Type: Bug
  Components: spellchecker
Affects Versions: 1.3
Reporter: Matthew Runo
Priority: Minor

 Search term: Quicksilver... I get two suggestions...
 <lst name="suggestion">
   <int name="frequency">2</int>
   <str name="word">Quicksilver</str>
 </lst>
 <lst name="suggestion">
   <int name="frequency">220</int>
   <str name="word">Quiksilver</str>
 </lst>
 ...and it's not correctly spelled...
 <bool name="correctlySpelled">false</bool>
 ...but the collation is of the first term - not the one with the highest 
 frequency?
 <str name="collation">Quicksilver</str>
 Other collations, for example, 'runnning' come up with more than one 
 suggestion (cunning, running) but properly pick the 'best bet' based on 
 frequency. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-725) CoreContainer/CoreDescriptor/SolrCore cleansing

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-725.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 CoreContainer/CoreDescriptor/SolrCore cleansing
 ---

 Key: SOLR-725
 URL: https://issues.apache.org/jira/browse/SOLR-725
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.3
Reporter: Henri Biestro
 Attachments: solr-725.patch, solr-725.patch, solr-725.patch, 
 solr-725.patch


 These 3 classes and the name vs alias handling are somewhat confusing.
 The recent SOLR-647 & SOLR-716 have created a bit of a flux.
 This issue attempts to clarify the model and the list of operations. 
 h3. CoreDescriptor: describes the parameters of a SolrCore
 h4. Definitions
 * has one name
   ** The CoreDescriptor name may represent multiple aliases; in that 
 case, first alias is the SolrCore name
 * has one instance directory location
 * has one config & schema name
 h4. Operations
 The class is only a parameter passing facility
 h3. SolrCore: manages a Lucene index
 h4. Definitions
 * has one unique *name* (in the CoreContainer)
 **the *name* is used in JMX to identify the core
 * has one current set of *aliases*
 **the name is the first alias
 h4. Name & alias operations
 * *get name/aliases*: obvious
 * *alias*: adds an alias to this SolrCore
 * *unalias*: removes an alias from this SolrCore
 * *name*: sets the SolrCore name
 **potentially impacts JMX registration
 * *rename*: picks a new name from the SolrCore aliases
 **triggered when alias name is already in use
 h3. CoreContainer: manages all relations between cores & descriptors
 h4. Definitions
 * has a set of aliases (each of them pointing to one core)
 **ensure alias uniqueness.
 h4. SolrCore instance operations
 * *load*: makes a SolrCore available for requests
 **creates a SolrCore
 **registers all SolrCore aliases in the aliases set
 **(load = create + register)
 * *unload*: removes a core identified by one of its aliases
 **stops handling the Lucene index
 **all SolrCore aliases are removed
 * *reload*: recreate the core identified by one of its aliases
 * *create*: create a core from a CoreDescriptor
 **readies up the Lucene index
 * *register*: registers all aliases of a SolrCore
   
 h4. SolrCore & alias operations
 * *swap*: swaps 2 aliases
 **method: swap
 * *alias*: creates 1 alias for a core, potentially unaliasing a 
 previously used alias
 **The SolrCore name being an alias, this operation might trigger 
 a SolrCore rename
 * *unalias*: removes 1 alias for a core
 **The SolrCore name being an alias, this operation might trigger 
 a SolrCore rename
 *  *rename*: renames a core
 h3. CoreAdminHandler: handles CoreContainer operations
 * *load*/*create*:  CoreContainer load
 * *unload*:  CoreContainer unload
 * *reload*: CoreContainer reload
 * *swap*:  CoreContainer swap
 * *alias*:  CoreContainer alias
 * *unalias*: CoreContainer unalias
 *  *rename*: CoreContainer rename
 * *persist*: CoreContainer persist, writes the solr.xml
 * *status*: returns the status of all/one SolrCore
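 Of the operations above, swap and rename are what stock Solr exposes today; a SolrJ sketch of those two, assuming a local server and made-up core names:
 {code}
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.common.params.CoreAdminParams.CoreAdminAction;

public class CoreAdminSketch {
    public static void main(String[] args) throws Exception {
        HttpSolrServer admin = new HttpSolrServer("http://localhost:8983/solr"); // made-up host
        // swap: atomically exchange the names of two live cores
        CoreAdminRequest swap = new CoreAdminRequest();
        swap.setAction(CoreAdminAction.SWAP);
        swap.setCoreName("core0");          // made-up core names
        swap.setOtherCoreName("core1");
        swap.process(admin);
        // rename: give an existing core a new name
        CoreAdminRequest.renameCore("core1", "core1-old", admin);
    }
}
 {code}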



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-456) Ability to choose another analyzer for field

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-456.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 Ability to choose another analyzer for field
 

 Key: SOLR-456
 URL: https://issues.apache.org/jira/browse/SOLR-456
 Project: Solr
  Issue Type: Wish
  Components: highlighter
Affects Versions: 1.3
Reporter: Sergey Dryganets
 Attachments: OverrideFieldAnalyzer.patch


 To add new search options, for example both case-sensitive and 
 case-insensitive search, we need to index the same field twice.
 For example, to create a field with 2 search options
 we should create 3 fields:
 1. a case-sensitive index-only field cs_text
 2. a case-insensitive index-only field ncs_text
 3. a storage-only field text
 So to properly highlight a search by index we should use the analyzer from another 
 field (we send hl.fl=text but search by cs_text or ncs_text).
 Is it possible to add a parameter to override the highlighter analyzer per field?
 i.e. I want to send the parameter
 f.fieldName.hl.fieldOverride=anotherFieldName
 or, for the example above:
 f.text.hl.fieldOverride=ncs_text



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1917) Possible Null Pointer Exception in highlight or debug component

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1917.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Possible Null Pointer Exception in highlight or debug component
 ---

 Key: SOLR-1917
 URL: https://issues.apache.org/jira/browse/SOLR-1917
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 1.4
Reporter: David Bowen
 Attachments: SOLR-1917.patch


 This bug may only show up if you have the patch for SOLR-1143 installed, but 
 it should be fixed in any case since the existing logic is wrong.  It is 
 explicitly looking for the nulls that can cause the exception, but only after 
 the exception would have already happened.
 What happens is that there is an array of Map.Entry objects which is 
 converted into a SimpleOrderedMap, and then there is a method that iterates 
 over the SimpleOrderedMap looking for null names.  That's wrong because it is 
 the array elements themselves which can be null, so constructing the 
 SimpleOrderedMap throws an NPE.
 I will attach a patch.
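 The shape of the fix, as I read the description -- skip null array elements before building the named list (the method and its inputs are stand-ins for the real construction site, not the actual patch):
 {code}
import java.util.Map;
import org.apache.solr.common.util.SimpleOrderedMap;

public class NullSafeOrderedMap {
    // stand-in for the construction site in the highlight/debug component
    public static SimpleOrderedMap<Object> build(Map.Entry<String, Object>[] entries) {
        SimpleOrderedMap<Object> out = new SimpleOrderedMap<Object>();
        for (Map.Entry<String, Object> e : entries) {
            if (e == null) {
                continue; // the array elements themselves can be null -- check first
            }
            out.add(e.getKey(), e.getValue());
        }
        return out;
    }
}
 {code}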



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-2832) SolrException: Internal Server Error occurs when optimize index files

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-2832.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 SolrException: Internal Server Error occurs when optimize index files
 -

 Key: SOLR-2832
 URL: https://issues.apache.org/jira/browse/SOLR-2832
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
 Environment: CentOS 5.6
 Tomcat 6.0.29
 Java 1.6
Reporter: Kobayashi

 SolrException: Internal Server Error occurs when optimize index files
 When I call optimize() using SolrJ I receive the following error:
 org.apache.solr.common.SolrException: Internal Server Error
 Internal Server Error
 request: http://xxx.xxx.xxx.xxx/solr/update
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:435)
 org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244)
 org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
 org.apache.solr.client.solrj.SolrServer.optimize(SolrServer.java:94)
 org.apache.solr.client.solrj.SolrServer.optimize(SolrServer.java:82)
 Here is my situation:
 - After committing all of my documents, there are 2 segments files and 385 
 index files totaling 5.4GB.
 - Calling optimize(), with no parameters. 
 - About 5 minutes later, the Solr exception occurs.
 - The index files seem to be merged into 11 files. 
 - The index is searchable with no stress.
 - There are no error logs in catalina when the SolrException occurs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1912) Replication handler should offer more useful status messages, especially during fsync/commit/etc.

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1912.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Replication handler should offer more useful status messages, especially 
 during fsync/commit/etc.
 -

 Key: SOLR-1912
 URL: https://issues.apache.org/jira/browse/SOLR-1912
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 1.4
Reporter: Chris Harris
 Attachments: SOLR-1912.patch


 If you go to the replication admin page 
 (http://server:port/solr/core/admin/replication/index.jsp) while replication 
 is in progress, then you'll see a Current Replication Status section, which 
 indicates how far along the replication download is, both overall and for the 
 current file. It's great to see this status info. However, the replication 
 admin page becomes misleading once the last file has been downloaded. In 
 particular, after all downloads are complete Solr 1.4 continues to display 
 things like this:
 {quote}
 Downloading File: _wv_1.del, Downloaded: 44 bytes / 44 bytes [100.0%] 
 {quote}
 until all the index copying, fsync-ing, committing, and so on are complete. 
 It gives the disconcerting impression that data transfer between master and 
 slaves has mysteriously stalled right at the end of a 44 byte download. In 
 case this is weird, let me mention that after a full replication I did just 
 now, Solr spent quite a while in SnapPuller.terminateAndWaitFsyncService(), 
 somewhere between many seconds and maybe 5 minutes.
 I propose that the admin page should offer more useful status messages while 
 fsync/etc. are going on. I offer an initial patch that does this. SnapPuller 
 is modified to always offer a human readable indication of the current 
 operation, and this is displayed on the replication page. We also stop 
 showing progress indication for the current file, except when there 
 actually is a file currently being downloaded.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-629) Fuzzy search with DisMax request handler

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-629.
-

Resolution: Won't Fix

2013 Old JIRA cleanup

 Fuzzy search with DisMax request handler
 

 Key: SOLR-629
 URL: https://issues.apache.org/jira/browse/SOLR-629
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Affects Versions: 1.3
Reporter: Guillaume Smet
Priority: Minor
 Attachments: dismax_fuzzy_query_field.v0.1.diff, 
 dismax_fuzzy_query_field.v0.1.diff


 The DisMax search handler doesn't support fuzzy queries which would be quite 
 useful for our usage of Solr and from what I've seen on the list, it's 
 something several people would like to have.
 Following this discussion 
 http://markmail.org/message/tx6kqr7ga6ponefa#query:solr%20dismax%20fuzzy+page:1+mid:c4pciq6rlr4dwtgm+state:results
  , I added the ability to add fuzzy query field in the qf parameter. I kept 
 the patch as conservative as possible.
 The syntax supported is: fieldOne^2.3 fieldTwo~0.3 fieldThree~0.2^-0.4 
 fieldFour as discussed in the above thread.
 The recursive query aliasing should work even with fuzzy query fields using a 
 very simple rule: the aliased fields inherit the minSimilarity of their 
 parent, combined with their own one if they have one.
 Only the qf parameter supports this syntax atm. I suppose we should make it 
 usable in pf too. Any opinion?
 Comments are very welcome, I'll spend the time needed to put this patch in 
 good shape.
 Thanks.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1880) Performance: Distributed Search should skip GET_FIELDS stage if EXECUTE_QUERY stage gets all fields

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1880.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Performance: Distributed Search should skip GET_FIELDS stage if EXECUTE_QUERY 
 stage gets all fields
 ---

 Key: SOLR-1880
 URL: https://issues.apache.org/jira/browse/SOLR-1880
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 1.4
Reporter: Shawn Smith
 Attachments: ASF.LICENSE.NOT.GRANTED--one-pass-query-v1.4.0.patch, 
 ASF.LICENSE.NOT.GRANTED--one-pass-query.patch


 Right now, a typical distributed search using QueryComponent makes two HTTP 
 requests to each shard:
 # STAGE_EXECUTE_QUERY executes one HTTP request to each shard to get top N 
 ids and sort keys, merges the results to produce a final list of document IDs 
 (PURPOSE_GET_TOP_IDS).
 # STAGE_GET_FIELDS executes a second HTTP request to each shard to get the 
 document field values for the final list of document IDs (PURPOSE_GET_FIELDS).
 If the fl param is just "id" or just "id,score", all document data to 
 return is already fetched by STAGE_EXECUTE_QUERY.  The second 
 STAGE_GET_FIELDS query is completely unnecessary.  Eliminating that 2nd HTTP 
 request can make a big difference in overall performance.
 Also, when the fl param only gets id, score and sort columns, it would probably 
 be cheaper to fetch the final sort column data in STAGE_EXECUTE_QUERY, which 
 has to read the sort column data anyway, and skip STAGE_GET_FIELDS.
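 For illustration, the degenerate request where the second stage buys nothing -- a SolrJ sketch with a placeholder host and shard list:
 {code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class IdOnlyDistributedQuery {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr"); // placeholder
        SolrQuery q = new SolrQuery("title:lucene");
        q.setParam("shards", "host1:8983/solr,host2:8983/solr"); // placeholder shards
        // ids and scores are fully known after STAGE_EXECUTE_QUERY,
        // so STAGE_GET_FIELDS fetches nothing new for this fl
        q.setFields("id", "score");
        QueryResponse rsp = server.query(q);
        System.out.println(rsp.getResults().getNumFound());
    }
}
 {code}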



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-2087) Dismax handler not handling +/- correctly

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-2087.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Dismax handler not handling +/- correctly
 -

 Key: SOLR-2087
 URL: https://issues.apache.org/jira/browse/SOLR-2087
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 1.4
Reporter: Gabriel Weinberg

 If I do a query like "i'm a walking contradiction", it matches pf as 
 text:"i'm_a a_walking walking contradiction"^2.0, and it matches fine.
 If I do a query like "i'm a +walking contradiction", it matches pf as 
 text:"i'm_a a_+walking +walking contradiction"^2.0 and doesn't match at all.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1903) Null pointer exception when you remove an old date field type and old data still has field

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1903.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Null pointer exception when you remove an old date field type and old data 
 still has field
 ---

 Key: SOLR-1903
 URL: https://issues.apache.org/jira/browse/SOLR-1903
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 1.4
 Environment: linux, windows server and windows 7
Reporter: Stephen Schwenker
 Attachments: removedatefieldexception.patch


 We removed an old date field from our date search because it's not needed 
 anymore, after which we started seeing these errors with old data in the 
 index that contained the field. There are too many records to re-index right 
 now, so I created this patch. Also, here is the exception that was thrown.
 null java.lang.NullPointerException at 
 org.apache.solr.request.XMLWriter.writePrim(XMLWriter.java:761) at 
 org.apache.solr.request.XMLWriter.writeStr(XMLWriter.java:619) at 
 org.apache.solr.schema.TextField.write(TextField.java:45) at 
 org.apache.solr.schema.SchemaField.write(SchemaField.java:108) at 
 org.apache.solr.request.XMLWriter.writeDoc(XMLWriter.java:311) at 
 org.apache.solr.request.XMLWriter$3.writeDocs(XMLWriter.java:483) at 
 org.apache.solr.request.XMLWriter.writeDocuments(XMLWriter.java:420) at 
 org.apache.solr.request.XMLWriter.writeDocList(XMLWriter.java:457) at 
 org.apache.solr.request.XMLWriter.writeVal(XMLWriter.java:520) at 
 org.apache.solr.request.XMLWriter.writeResponse(XMLWriter.java:130) at 
 org.apache.solr.request.XMLResponseWriter.write(XMLResponseWriter.java:34) at 
 org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:325)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
  at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
  at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
  at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
  at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) 
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) 
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
  at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) 
 at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) 
 at 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
  at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) 
 at java.lang.Thread.run(Unknown Source) 



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1910) Add hl.df (highlight-specific default field) param, so highlighting can have a separate analysis path from search

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1910.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Add hl.df (highlight-specific default field) param, so highlighting can have 
 a separate analysis path from search
 -

 Key: SOLR-1910
 URL: https://issues.apache.org/jira/browse/SOLR-1910
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Affects Versions: 1.4
Reporter: Chris Harris
 Attachments: SOLR-1910.patch


 Summary: Patch adds a hl.df parameter, to help with (some) situations where 
 the highlighter currently uses the wrong analyzer for highlighting.
 What: hl.df is like the normal df parameter, except that it takes effect only 
 during highlighting. (In fact the implementation is basically to temporarily 
 mess with the normal df parameter at the start of highlighting, and then  
 revert to the original value when highlighting is complete.) When hl.df is 
 specified, we make sure not to use the Query object that was parsed by 
 QueryComponent, but rather make our own. In the right circumstances anyway, 
 this means that a more appropriate analyzer gets used for highlighting.
 Motivation: Currently, in a normal query+highlighting request, the 
 highlighter re-uses the Query object parsed by the QueryComponent. This can 
 result in incorrect highlights if the field being highlighted is of a 
 different type than the field being queried. In my particular case:
  * My queries don't explicitly specify field names; they always rely on the 
 default field
  * My default field for search is body
  * body is a unigram-plus-bigram field. So, e.g. input audit trail gets 
 turned into tokens audit / audit trail / trail. (This is a performance 
 optimization.)
  * If I try to highlight directly on body, the highlights get screwed up. 
 (This is because the highlighter doesn't really support the kind of 
 continuously overlapping tokens generated by my analysis chain. In short, 
 the bigrams confuse the TokenGroup class.)
  * To avoid these highlighting problems, I don't directly highlight body, 
 but rather a highlight field, which has no bigram tokens. (highlight is 
 populated from body with a copyfield directive.)
  * Without hl.df, I have a new class of highlighting problems. In particular, 
 if the user enters a phrase search (e.g. audit trail), then that phrase 
 appears unhighlighted in the highlighter output. The short version for why is 
 that the analyzer used to parse the query produced a Query object that contains 
 bigrams, but the text that we're highlighting doesn't contain bigrams.
  * With hl.df, the analyzers match up for highlight; the Query object used 
 for highlighting does _not_ contain bigrams, just like the highlight field.
 (I realize it may help to expand the description of this use case, but I'm a 
 bit hurried right now.)
 I wanted to throw this out there, partly in case people have any better 
 solutions. One variation on hl.df option that might be worth considering is 
 hl.UseHighlightedFieldAsDefaultField, which would create a new Query object 
 not just once at the start of highlighting, but separately for each 
 particular field that's getting highlighted.
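 To make the proposal concrete, a sketch of a request built against the patch (hl.df exists only in the attached patch; the field names are the ones from the use case above):
 {code}
import org.apache.solr.client.solrj.SolrQuery;

public class HlDfRequestSketch {
    public static SolrQuery build(String userQuery) {
        SolrQuery q = new SolrQuery(userQuery); // relies on df=body for search
        q.setHighlight(true);
        q.set("hl.fl", "highlight"); // highlight the bigram-free copyfield
        q.set("hl.df", "highlight"); // patch-only param: re-parse the query so the
                                     // highlighting Query matches the highlight field's analysis
        return q;
    }
}
 {code}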



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1872) Document-level Access Control in Solr

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1872.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Document-level Access Control in Solr
 -

 Key: SOLR-1872
 URL: https://issues.apache.org/jira/browse/SOLR-1872
 Project: Solr
  Issue Type: New Feature
  Components: SearchComponents - other
Affects Versions: 1.4
 Environment: Solr 1.4
Reporter: Peter Sturge
Priority: Minor
  Labels: access, control
 Attachments: ASF.LICENSE.NOT.GRANTED--SolrACLSecurity.java, 
 ASF.LICENSE.NOT.GRANTED--SolrACLSecurity.java, SolrACLSecurity.rar


 This issue relates to providing document-level access control for Solr index 
 data.
 A related JIRA issue is: SOLR-1834. I thought it would be best if I created a 
 separate JIRA issue, rather than tack on to SOLR-1834, as the approach here 
 is somewhat different, and I didn't want to confuse things or step on Anders' 
 good work.
 There have been lots of discussions about document-level access in Solr using 
 LCF, custom components and the like. Access Control is one of those subjects 
 that quickly spreads to lots of 'ratholes' to dive into. Even if not everyone 
 agrees with the approaches taken here, it does, at the very least, highlight 
 some of the salient issues surrounding access control in Solr, and will 
 hopefully initiate a healthy discussion on the range of related requirements, 
 with the aim of finding the optimum balance of requirements.
 The approach taken here is document and schema agnostic - i.e. the access 
 control is independant of what is or will be in the index, and no schema 
 changes are required. This version doesn't include LDAP/AD integration, but 
 could be added relatively easily (see Ander's very good work on this in 
 SOLR-1834). Note that, at the moment, this version doesn't deal with /update, 
 /replication etc., it's currently a /select thing at the moment (but it could 
 be used for these).
 This approach uses a SearchComponent subclass called SolrACLSecurity. Its 
 configuration is read in from solrconfig.xml in the usual way, and the 
 allow/deny configuration is split out into a config file called acl.xml.
 acl.xml defines a number of users and groups (and 1 global for 'everyone'), 
 and assigns 0 or more {{acl-allow}} and/or {{acl-deny}} elements.
 When the SearchComponent is initialized, user objects are created and cached, 
 including an 'allow' list and a 'deny' list.
 When a request comes in, these lists are used to build filter queries 
 ('allows' are OR'ed and 'denies' are NAND'ed), and then added to the query 
 request.
 Because the allow and deny elements are simply subsearch queries (e.g. 
 {{<acl-allow>somefield:secret</acl-allow>}}), this mechanism will work on any 
 stored data that can be queried, including already existing data.
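 A sketch of that combination -- allows OR'ed into one positive filter query, denies OR'ed and negated (the clause strings are invented):
 {code}
import java.util.Arrays;
import java.util.List;
import org.apache.solr.common.params.ModifiableSolrParams;

public class AclFilterSketch {
    static String join(List<String> clauses) {
        StringBuilder sb = new StringBuilder();
        for (String c : clauses) {
            if (sb.length() > 0) sb.append(" OR ");
            sb.append(c);
        }
        return sb.toString();
    }

    public static void addAclFilters(ModifiableSolrParams p, List<String> allows, List<String> denies) {
        if (!allows.isEmpty()) p.add("fq", "(" + join(allows) + ")");  // allows are OR'ed
        if (!denies.isEmpty()) p.add("fq", "-(" + join(denies) + ")"); // denies are NAND'ed
    }

    public static void main(String[] args) {
        ModifiableSolrParams p = new ModifiableSolrParams();
        addAclFilters(p, Arrays.asList("somefield:secret"), Arrays.asList("somefield:topsecret"));
        System.out.println(p); // fq=(somefield:secret)&fq=-(somefield:topsecret)
    }
}
 {code}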
 Authentication
 One of the sticky problems with access control is how to determine who's 
 asking for data. There are many approaches, and to stay in the generic vein 
 the current mechanism uses http parameters for this.
 For an initial search, a client includes a {{username=somename}} parameter 
 and a {{hash=pwdhash}} hash of its password. If the request sends the correct 
 parameters, the search is granted and a uuid parameter is returned in the 
 response header. This uuid can then be used in subsequent requests from the 
 client. If the request is wrong, the SearchComponent fails and will increment 
 the user's failed login count (if a valid user was specified). If this count 
 exceeds the configured lockoutThreshold, no further requests are granted 
 until the lockoutTime has elapsed.
 This mechanism protects against some types of attacks (e.g. CRLF, dictionary 
 etc.), but it really needs container HTTPS as well (as would most other auth 
 implementations). Incorporating SSL certificates for authentication and 
 making the authentication mechanism pluggable would be a nice improvement 
 (i.e. separate authentication from access control).
 Another issue is how internal searchers perform autowarming etc. The solution 
 here is to use a local key called 'SolrACLSecurityKey'. This key is local and 
 [should be] unique to that server. firstSearcher, newSearcher et al then 
 include this key in their parameters so they can perform autowarming without 
 constraint. Again, there are likely many ways to achieve this, this approach 
 is but one.
 The attached rar holds the source and associated configuration. This has been 
 tested on the 1.4 release codebase (search in the attached solrconfig.xml for 
 SolrACLSecurity to find the relevant sections in this file).
 I hope this proves helpful for people who are looking for this sort of 
 functionality in Solr, and more generally to address how such a mechanism 

[jira] [Resolved] (SOLR-1878) RelaxQueryComponent - A new SearchComponent that relaxes the main query in a semiautomatic way

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1878.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 RelaxQueryComponent - A new SearchComponent that relaxes the main query in a 
 semiautomatic way
 --

 Key: SOLR-1878
 URL: https://issues.apache.org/jira/browse/SOLR-1878
 Project: Solr
  Issue Type: New Feature
  Components: SearchComponents - other
Affects Versions: 1.4
Reporter: Koji Sekiguchi
Priority: Minor

 I have the following use case:
 Imagine that you visit a web page for searching an apartment for rent. You 
 choose parameters, usually mark check boxes and this makes AND queries:
 {code}
 rent:[* TO 1500] AND bedroom:[2 TO *] AND floor:[100 TO *]
 {code}
 If the conditions are too tight, Solr may return few or zero leasehold 
 properties. Because this is not good for the site visitors and also the 
 owners, the owner may want to recommend that the visitors relax the conditions, 
 something like:
 {code}
 rent:[* TO 1700] AND bedroom:[2 TO *] AND floor:[100 TO *]
 {code}
 or:
 {code}
 rent:[* TO 1500] AND bedroom:[2 TO *] AND floor:[90 TO *]
 {code}
 And if the relaxed query gets a larger numFound than the original, the web page 
 can provide a link with a comment "if you can pay an additional $100, ${numFound} 
 properties will be found!".
 Today, I need to implement a Solr client for this scenario, but this way requires 
 two round trips to show one page and creates a consistency problem (and it is 
 laborious, of course!).
 I'm thinking a new SearchComponent that can be used with QueryComponent. It 
 does search when numFound of the main query is less than a threshold. Clients 
 can specify via request parameters how the query can be relaxed.
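 The two-round-trip client-side version being complained about, sketched with the rent query from above (the threshold and SolrServer wiring are illustrative):
 {code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RelaxedRetrySketch {
    public static QueryResponse search(SolrServer server, long threshold) throws Exception {
        SolrQuery strict = new SolrQuery("rent:[* TO 1500] AND bedroom:[2 TO *] AND floor:[100 TO *]");
        QueryResponse rsp = server.query(strict);
        if (rsp.getResults().getNumFound() >= threshold) {
            return rsp; // enough hits, no need to relax
        }
        // second round trip with the relaxed rent condition
        SolrQuery relaxed = new SolrQuery("rent:[* TO 1700] AND bedroom:[2 TO *] AND floor:[100 TO *]");
        return server.query(relaxed);
    }
}
 {code}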



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1927) DocBuilder Inefficiency

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1927.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 DocBuilder Inefficiency
 ---

 Key: SOLR-1927
 URL: https://issues.apache.org/jira/browse/SOLR-1927
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.4
Reporter: Robert Zotter
Priority: Trivial
  Labels: DIH, DocBuilder
 Attachments: SOLR-1927.patch


 I am looking into the collectDelta method in DocBuilder.java and I noticed that
 to determine the deltaRemoveSet it currently loops through the whole
 deltaSet for each deleted row.
 Does anyone else agree that this is quite inefficient?
 For delta-imports with a large deltaSet and deletedSet I found a
 considerable improvement in speed if we just save all deleted keys in a set.
 Then we just have to loop through the deltaSet once to determine which rows
 should be removed by checking if the deleted key set contains the delta row
 key.
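 The shape of the improvement, with String keys standing in for DIH's row key objects:
 {code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DeltaFilterSketch {
    // O(n*m): scan the whole deltaSet once per deleted row (current behaviour).
    // O(n+m): put the deleted keys in a hash set, scan the deltaSet once.
    public static List<String> keepRows(List<String> deltaKeys, List<String> deletedKeys) {
        Set<String> deleted = new HashSet<String>(deletedKeys);
        List<String> kept = new ArrayList<String>();
        for (String key : deltaKeys) {
            if (!deleted.contains(key)) {
                kept.add(key);
            }
        }
        return kept;
    }
}
 {code}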



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1861) HTTP Authentication for sharded queries

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1861.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 HTTP Authentication for sharded queries
 ---

 Key: SOLR-1861
 URL: https://issues.apache.org/jira/browse/SOLR-1861
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 1.4
 Environment: Solr 1.4
Reporter: Peter Sturge
Priority: Minor
  Labels: authentication, distributed, http, shard
 Attachments: SearchHandler.java, SearchHandler.java


 This issue came out of a requirement to have HTTP authentication for queries. 
 Currently, HTTP authentication works for querying single servers, but it's 
 not possible for distributed searches across multiple shards to receive 
 authenticated http requests.
 This patch adds the option for Solr clients to pass shard-specific http 
 credentials to SearchHandler, which can then use these credentials when 
 making http requests to shards.
 Here's how the patch works:
 A final constant String called {{shardcredentials}} acts as the name of the 
 SolrParams parameter key name.
 The format for the value associated with this key is a comma-delimited list 
 of colon-separated tokens:
 {{   
 shard0:port0:username0:password0,shard1:port1:username1:password1,shardN:portN:usernameN:passwordN
   }}
 A client adds these parameters to their sharded request. 
 In the absence of {{shardcredentials}} and/or matching credentials, the patch 
 reverts to the existing behaviour of using a default http client (i.e. no 
 credentials). This ensures b/w compatibility.
 When SearchHandler receives the request, it passes the 'shardcredentials' 
 parameter to the HttpCommComponent via the submit() method.
 The HttpCommComponent parses the parameter string, and when it finds matching 
 credentials for a given shard, it creates an HttpClient object with those 
 credentials, and then sends the request using this.
 Note: Because the match comparison is a string compare (a.o.t. dns compare), 
 the host/ip names used in the shardcredentials parameters must match those 
 used in the shards parameter.
 Impl Notes:
 This patch is used and tested on the 1.4 release codebase. There weren't any 
 significant diffs between the 1.4 release and the latest trunk for 
 SearchHandler, so should be fine on other trunks, but I've only tested with 
 the 1.4 release code base.
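 A sketch of a client request carrying the patch's parameter (the shardcredentials name and token format come from the patch; the hosts and credentials are invented):
 {code}
import org.apache.solr.client.solrj.SolrQuery;

public class ShardCredentialsSketch {
    public static SolrQuery build() {
        SolrQuery q = new SolrQuery("*:*");
        q.setParam("shards", "host1:8983/solr,host2:8983/solr"); // invented hosts
        // one shard:port:username:password token per shard; the host strings
        // must match the shards parameter exactly (string compare, no DNS)
        q.setParam("shardcredentials", "host1:8983:alice:s3cret,host2:8983:alice:s3cret");
        return q;
    }
}
 {code}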



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1939) DataImportHandler reports success after running out of disk space

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1939.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 DataImportHandler reports success after running out of disk space
 -

 Key: SOLR-1939
 URL: https://issues.apache.org/jira/browse/SOLR-1939
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 1.4
Reporter: Wojtek Piaseczny
  Labels: newdev
 Attachments: SOLR-1939.patch, notes_for_SOLR1939.rtf


 DataImportHandler reports success after running out of disk space 
 (.properties file is updated with new time stamps and status page reports 
 success). Out of disk space exceptions should be treated more like 
 datasource-level exceptions than like document-level exceptions (i.e. they 
 should cause the import to report failure).
 Original discussion here:
 http://lucene.472066.n3.nabble.com/DataImportHandler-and-running-out-of-disk-space-td835125.html



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1937) add ability to set a facet.minpercentage (analog to facet.mincount)

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1937.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 add ability to set a facet.minpercentage (analog to facet.mincount)
 ---

 Key: SOLR-1937
 URL: https://issues.apache.org/jira/browse/SOLR-1937
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.4
Reporter: Lukas Kahwe Smith
Priority: Minor

 See this thread on the ML: 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201005.mbox/%3c6ee240dc-7674-4dee-806e-b78ffd558...@pooteeweet.org%3e
 Obviously I could implement this in userland (like mincount as well if it 
 wouldn't be available yet), but I wonder if anyone else sees use in being 
 able to define that a facet must match a minimum percentage of all documents 
 in the result set, rather than a hardcoded value? The idea being that while I 
 might not be interested in a facet that only covers 3 documents in the result 
 set if there are, let's say, 1000 documents in the result set, the situation 
 would be a lot different if I only have 10 documents in the result set.
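 The userland version is simple enough -- a SolrJ sketch filtering facet counts by percentage of numFound (the field name and threshold are arbitrary):
 {code}
import org.apache.solr.client.solrj.response.FacetField;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FacetMinPercentageSketch {
    public static void printFacets(QueryResponse rsp, String field, double minPercentage) {
        long total = rsp.getResults().getNumFound();
        if (total == 0) return;
        FacetField ff = rsp.getFacetField(field);
        if (ff == null || ff.getValues() == null) return;
        for (FacetField.Count c : ff.getValues()) {
            // keep only values covering at least minPercentage of the result set
            if ((double) c.getCount() / total >= minPercentage) {
                System.out.println(c.getName() + ": " + c.getCount());
            }
        }
    }
}
 {code}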



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1871) Function Query map variant that allows target to be an arbitrary ValueSource

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1871.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 Function Query map variant that allows target to be an arbitrary 
 ValueSource
 

 Key: SOLR-1871
 URL: https://issues.apache.org/jira/browse/SOLR-1871
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 1.4
Reporter: Chris Harris
 Attachments: ASF.LICENSE.NOT.GRANTED--SOLR-1871.patch, 
 SOLR-1871.patch, SOLR-1871.patch


 Currently, as documented at http://wiki.apache.org/solr/FunctionQuery#map, 
 the target of a map must be a floating point constant. I propose that you 
 should have at least the option of doing a map where the target is an 
 arbitrary ValueSource.
 The particular use case that inspired this is that I want to be able to 
 control how missing date fields affected boosting. In particular, I want to 
 be able to use something like this in my function queries:
 {code}
 map(mydatefield,0,0,ms(NOW))
 {code}
 But this might have other uses.
 I'll attach an initial implementation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5515) NPE when getting stats on date field with empty result on solrcloud

2013-11-30 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5515.
-

   Resolution: Fixed
Fix Version/s: 4.7
   5.0

Thanks Alexander!

 NPE when getting stats on date field with empty result on solrcloud
 ---

 Key: SOLR-5515
 URL: https://issues.apache.org/jira/browse/SOLR-5515
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.5.1, 4.6
 Environment: ubuntu 13.04, java 1.7.0_45-b18 
Reporter: Alexander Sagen
Assignee: Shalin Shekhar Mangar
Priority: Critical
  Labels: datefield, solrcloud, stats, statscomponent
 Fix For: 5.0, 4.7


 Steps to reproduce:
 1. Download solr 4.6.0, unzip twice in different directories.
 2. Start a two-shard cluster based on default example
 {quote}
 dir1/example java -Dbootstrap_confdir=./solr/collection1/conf 
 -Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.jar
 dir2/example java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
 {quote}
 3. Visit 
 http://localhost:8983/solr/query?q=author:a&stats=true&stats.field=last_modified
 This causes a nullpointer (given that the index is empty or the query returns 
 0 rows)
 Stacktrace:
 {noformat}
 190 [qtp290025410-11] INFO  org.apache.solr.core.SolrCore  – 
 [collection1] webapp=/solr path=/query 
 params={stats.field=last_modified&stats=true&q=author:a} hits=0 status=500 
 QTime=8 
 191 [qtp290025410-11] ERROR org.apache.solr.servlet.SolrDispatchFilter  – 
 null:java.lang.NullPointerException
 at 
 org.apache.solr.handler.component.DateStatsValues.updateTypeSpecificStats(StatsValuesFactory.java:409)
 at 
 org.apache.solr.handler.component.AbstractStatsValues.accumulate(StatsValuesFactory.java:109)
 at 
 org.apache.solr.handler.component.StatsComponent.handleResponses(StatsComponent.java:113)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:311)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:710)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:413)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
 at org.eclipse.jetty.server.Server.handle(Server.java:368)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
 at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
 at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
 at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
 

[jira] [Resolved] (SOLR-1886) In SolrJ, XMLResponseParser throws an exception when attempting to process a response with spellcheck.extendedResults=true containing word suggestions

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-1886.
--

Resolution: Won't Fix

2013 Old JIRA cleanup

 In SolrJ, XMLResponseParser throws an exception when attempting to process a 
 response with spellcheck.extendedResults=true containing word suggestions
 --

 Key: SOLR-1886
 URL: https://issues.apache.org/jira/browse/SOLR-1886
 Project: Solr
  Issue Type: Bug
  Components: spellchecker
Affects Versions: 1.4
Reporter: Nicholas Brozack

 This occurs when there are word suggestions.
 The error occurs in the readArray method, the message being: error reading 
 value:LST
 The reason for this error is likely that the format of the response with the 
 extendedResults option has unnamed lists.
 Here is an example response where this occurs:
 <response>
 (response header is here)
 <result name="response" numFound="0" start="0"/>
 <lst name="spellcheck">
  <lst name="suggestions">
   <lst name="emaild">
    <int name="numFound">2</int>
    <int name="startOffset">0</int>
    <int name="endOffset">6</int>
    <int name="origFreq">0</int>
    <arr name="suggestion">
     <lst>
      <str name="word">email</str>
      <int name="freq">2</int>
     </lst>
     <lst>
      <str name="word">webmail</str>
      <int name="freq">1</int>
     </lst>
    </arr>
   </lst>
   <bool name="correctlySpelled">false</bool>
   <str name="collation">email</str>
  </lst>
 </lst>
 </response>
 Surrounding the suggestions are unnamed lists.
 Considering the method in XMLResponseParser is named readNamedList, I'm 
 guessing that all lists must be named to avoid this error.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Way to not spam when closing obsolete issues?

2013-11-30 Thread Erick Erickson
Hmmm, got ambitious today and flooded the dev list closing a bunch of
really old issues. Is there a good way for me to do that without spamming
the dev list with all this junk?

I think people will do bulk closes etc. when releasing versions, so there
must be some way to do this.

Sorry for the noise!
Erick


Re: please put the jiras back?

2013-11-30 Thread Erick Erickson
For Lucene 1.x? Who's going to do anything except ignore them forever?
Anything with patches is so out of date that I'd expect it to be totally
irrelevant.

I can re-open them but it seems unnecessary to leave them hanging around,
it offends my sense of neatness. Hmmm, maybe that's insufficient reason for
me to do anything...


On Sat, Nov 30, 2013 at 8:20 AM, Robert Muir rcm...@gmail.com wrote:

 Erick, why did you mark a ton of LUCENE issues won't fix?

 Just because they've been open for a while?

 Can you please put these back? Closing bugs like
 https://issues.apache.org/jira/browse/LUCENE-993, just because nobody
 has fixed them yet, i dont think that helps anything.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: please put the jiras back?

2013-11-30 Thread Erick Erickson
s/do anything/close them/


On Sat, Nov 30, 2013 at 8:35 AM, Erick Erickson erickerick...@gmail.comwrote:

 For Lucene 1.x? Who's going to do anything except ignore them forever?
 Anything with patches is so out of date that I'd expect it to be totally
 irrelevant.

 I can re-open them but it seems unnecessary to leave them hanging around,
 it offends my sense of neatness. Hmmm, maybe that's insufficient reason for
 me to do anything...


 On Sat, Nov 30, 2013 at 8:20 AM, Robert Muir rcm...@gmail.com wrote:

 Erick, why did you mark a ton of LUCENE issues won't fix?

 Just because they've been open for a while?

 Can you please put these back? Closing bugs like
 https://issues.apache.org/jira/browse/LUCENE-993, just because nobody
 has fixed them yet, i dont think that helps anything.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org





Re: please put the jiras back?

2013-11-30 Thread Erick Erickson
Hmmm. I did switch my order by clause to affected version part way through,
so not all of the ones I closed would be version 1/2. Does it make sense to
put _some_ of them back for version XXX?


On Sat, Nov 30, 2013 at 8:40 AM, Erick Erickson erickerick...@gmail.comwrote:

 s/do anything/close them/


 On Sat, Nov 30, 2013 at 8:35 AM, Erick Erickson 
 erickerick...@gmail.comwrote:

 For Lucene 1.x? Who's going to do anything except ignore them forever?
 Anything with patches is so out of date that I'd expect it to be totally
 irrelevant.

 I can re-open them but it seems unnecessary to leave them hanging around,
 it offends my sense of neatness. Hmmm, maybe that's insufficient reason for
 me to do anything...


 On Sat, Nov 30, 2013 at 8:20 AM, Robert Muir rcm...@gmail.com wrote:

 Erick, why did you mark a ton of LUCENE issues won't fix?

 Just because they've been open for a while?

 Can you please put these back? Closing bugs like
 https://issues.apache.org/jira/browse/LUCENE-993, just because nobody
 has fixed them yet, i dont think that helps anything.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org






Re: please put the jiras back?

2013-11-30 Thread Robert Muir
Well for example LUCENE-993 is valid, just nobody fixed it yet.

I dont think we should close such bugs as won't fix just because the
issues have been open for a long time. closing as won't fix because
the issue no longer makes any sense at all is something different.

On Sat, Nov 30, 2013 at 5:35 AM, Erick Erickson erickerick...@gmail.com wrote:
 For Lucene 1.x? Who's going to do anything except ignore them forever?
 Anything with patches is so out of date that I'd expect it to be totally
 irrelevant.

 I can re-open them but it seems unnecessary to leave them hanging around, it
 offends my sense of neatness. Hmmm, maybe that's insufficient reason for me
 to do anything...


 On Sat, Nov 30, 2013 at 8:20 AM, Robert Muir rcm...@gmail.com wrote:

 Erick, why did you mark a ton of LUCENE issues won't fix?

 Just because they've been open for a while?

 Can you please put these back? Closing bugs like
 https://issues.apache.org/jira/browse/LUCENE-993, just because nobody
 has fixed them yet, i dont think that helps anything.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: please put the jiras back?

2013-11-30 Thread Robert Muir
I think you should go back and review all that you did.

Lots of people set tons of affects version: 1.0, 2.0, 3.3, 3.4, 4.4,
.. when reporting bugs.

You cannot infer a thing, just because an old version is set.

On Sat, Nov 30, 2013 at 5:48 AM, Erick Erickson erickerick...@gmail.com wrote:
 Hmmm. I did switch my order by clause to affected version part way through,
 so not all of the ones I closed would be version 1/2. Does it make sense to
 put _some_ of them back for version XXX?


 On Sat, Nov 30, 2013 at 8:40 AM, Erick Erickson erickerick...@gmail.com
 wrote:

 s/do anything/close them/


 On Sat, Nov 30, 2013 at 8:35 AM, Erick Erickson erickerick...@gmail.com
 wrote:

 For Lucene 1.x? Who's going to do anything except ignore them forever?
 Anything with patches is so out of date that I'd expect it to be totally
 irrelevant.

 I can re-open them but it seems unnecessary to leave them hanging around,
 it offends my sense of neatness. Hmmm, maybe that's insufficient reason for
 me to do anything...


 On Sat, Nov 30, 2013 at 8:20 AM, Robert Muir rcm...@gmail.com wrote:

 Erick, why did you mark a ton of LUCENE issues won't fix?

 Just because they've been open for a while?

 Can you please put these back? Closing bugs like
 https://issues.apache.org/jira/browse/LUCENE-993, just because nobody
 has fixed them yet, i dont think that helps anything.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org





-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-1952) @since tags missing from Javadocs

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened LUCENE-1952:



 @since tags missing from Javadocs
 -

 Key: LUCENE-1952
 URL: https://issues.apache.org/jira/browse/LUCENE-1952
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/javadocs
Reporter: Chris Pilsworth
Priority: Minor

 It would be useful to be able to see in which version classes/methods were
 added, by adding the @since javadoc tag.  I use quite an old version of
 Lucene that is integrated into the CMS I use, and often find that the
 features I need are not supported in the version I have.
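 A minimal sketch of what such a tag looks like (the class name and version
 number here are hypothetical, purely for illustration):
 {noformat}
 /**
  * Hypothetical utility class, shown only to illustrate the tag: the
  * @since line records the release in which the class first appeared.
  *
  * @since 2.9
  */
 public final class SomeNewUtility {
   private SomeNewUtility() {}
 }
 {noformat}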






[jira] [Reopened] (LUCENE-1964) InstantiatedIndex : TermFreqVector is missing

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened LUCENE-1964:



 InstantiatedIndex : TermFreqVector is missing
 -

 Key: LUCENE-1964
 URL: https://issues.apache.org/jira/browse/LUCENE-1964
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 2.9
 Environment: java 1.6
Reporter: David Causse
 Attachments: iiw-regression-fix.patch, term-vector-fix.patch


 TermFreqVector is missing when the index is created via the constructor.
 The constructor expects that fields with term vectors are retrieved with the
 getFields call, but that call returns only stored fields, and such fields are
 never/rarely stored.
 I've attached a patch to fix this issue.
 I had to add an int freq field to InstantiatedTermDocumentInformation because
 we cannot be sure the size of the termPositions array can be used as the freq
 information; that information may not be available with TermVector.YES.
 I don't know if I did it well, but it works with the attached unit test.
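 A repro sketch against the 2.9-era contrib API (reconstructed from memory,
 so names may be slightly off; the analyzer and field setup are illustrative
 assumptions):
 {noformat}
 import org.apache.lucene.analysis.WhitespaceAnalyzer;
 import org.apache.lucene.document.Document;
 import org.apache.lucene.document.Field;
 import org.apache.lucene.index.IndexReader;
 import org.apache.lucene.index.IndexWriter;
 import org.apache.lucene.store.RAMDirectory;
 import org.apache.lucene.store.instantiated.InstantiatedIndex;
 import org.apache.lucene.store.instantiated.InstantiatedIndexReader;

 public class TermVectorRepro {
   public static void main(String[] args) throws Exception {
     RAMDirectory dir = new RAMDirectory();
     IndexWriter w = new IndexWriter(dir, new WhitespaceAnalyzer(),
         IndexWriter.MaxFieldLength.UNLIMITED);
     Document doc = new Document();
     // Term vector requested, but the field is NOT stored.
     doc.add(new Field("body", "hello world", Field.Store.NO,
         Field.Index.ANALYZED, Field.TermVector.YES));
     w.addDocument(doc);
     w.close();

     IndexReader r = IndexReader.open(dir);
     InstantiatedIndexReader ir =
         new InstantiatedIndexReader(new InstantiatedIndex(r));
     // Expected: a TermFreqVector; observed: null, because the copying
     // constructor only inspects stored fields.
     System.out.println(ir.getTermFreqVector(0, "body"));
   }
 }
 {noformat}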






[jira] [Reopened] (SOLR-1444) Add option in solrconfig.xml to override the LogMergePolicy calibrateSizeByDeletes

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened SOLR-1444:
--


 Add option in solrconfig.xml to override the LogMergePolicy 
 calibrateSizeByDeletes
 

 Key: SOLR-1444
 URL: https://issues.apache.org/jira/browse/SOLR-1444
 Project: Solr
  Issue Type: Improvement
  Components: update
Affects Versions: 1.4
 Environment: NA
Reporter: Jibo John
Priority: Minor

 A patch was committed in Lucene
 (http://issues.apache.org/jira/browse/LUCENE-1634) that considers the
 number of deleted documents as a criterion when deciding which segments to
 merge.
 By default, calibrateSizeByDeletes = false in LogMergePolicy, so currently
 there is no way in Solr to set calibrateSizeByDeletes = true.
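 A hypothetical sketch of what such an option could look like in
 solrconfig.xml (the element names and placement are assumptions, not a
 committed syntax):
 {noformat}
 <indexDefaults>
   <mergePolicy class="org.apache.lucene.index.LogByteSizeMergePolicy">
     <!-- hypothetical knob, forwarded to LogMergePolicy.setCalibrateSizeByDeletes -->
     <bool name="calibrateSizeByDeletes">true</bool>
   </mergePolicy>
 </indexDefaults>
 {noformat}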






[jira] [Reopened] (SOLR-1508) Use field cache when creating response, if available and configured.

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened SOLR-1508:
--


 Use field cache when creating response, if available and configured.
 

 Key: SOLR-1508
 URL: https://issues.apache.org/jira/browse/SOLR-1508
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.3
Reporter: Tom Hill

 Allow a configured field to be returned to the user from the field cache,
 instead of reading the field from disk.  Relies on LUCENE-1981.
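 A minimal sketch of the idea with the 2.9-era FieldCache API (the reader
 setup and the field name are illustrative assumptions):
 {noformat}
 import java.io.IOException;
 import org.apache.lucene.index.IndexReader;
 import org.apache.lucene.search.FieldCache;

 public class FieldCacheValue {
   // Instead of reader.document(docId).get("title"), which seeks to the
   // stored fields on disk, read the value from the in-memory field cache.
   public static String title(IndexReader reader, int docId) throws IOException {
     String[] titles = FieldCache.DEFAULT.getStrings(reader, "title");
     return titles[docId];
   }
 }
 {noformat}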






[jira] [Reopened] (SOLR-1509) ShowFileRequestHandler has misleading error when asked for absolute path

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened SOLR-1509:
--


 ShowFileRequestHandler has misleading error when asked for absolute path
 -

 Key: SOLR-1509
 URL: https://issues.apache.org/jira/browse/SOLR-1509
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Simon Rosenthal
Priority: Minor

 When a user attempts to use the ShowFileRequestHandler (i.e. /admin/file) to
 access a file using an absolute path (which may result from solr.xml
 containing an absolute path for schema.xml or solrconfig.xml outside of the
 normal conf dir), the error message indicates that a file with a path
 consisting of the conf dir plus the absolute path can't be found.  The
 handler should explicitly check for absolute paths (like it checks for ..),
 and the error message should make it clear that absolute paths are not allowed.
 Example of current behavior...
 {noformat}
 schema path = /home/solrdata/rig1/conf/schema.xml
 url displayed in admin form = 
 http://host:port/solr/core1/admin/file/?file=/home/solrdata/rig1/conf/schema.xml
 error message: Can not find: schema.xml 
 [/path/to/core1/conf/directory/home/solrdata/rig1/conf/schema.xml]
 {noformat}
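 A minimal sketch of the suggested check (the real handler would presumably
 raise a SolrException with a 403-style code; the exception type here is a
 stand-in):
 {noformat}
 import java.io.File;

 public class AbsolutePathCheck {
   /** Reject absolute paths before resolving the name against the conf dir. */
   public static void check(String fname) {
     if (fname != null && new File(fname).isAbsolute()) {
       throw new IllegalArgumentException(
           "Absolute paths are not allowed: " + fname);
     }
   }
 }
 {noformat}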






[jira] [Reopened] (LUCENE-2009) task.mem should be set to use jvmargs that pin the min and max heap size

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened LUCENE-2009:



 task.mem should be set to use jvmargs that pin the min and max heap size
 

 Key: LUCENE-2009
 URL: https://issues.apache.org/jira/browse/LUCENE-2009
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/benchmark
Affects Versions: 2.9
Reporter: Mark Miller
Priority: Minor

 Currently, task.mem sets the Java Ant task parameter maxmemory; there is no
 equivalent minmemory. jvmargs should be used instead, with -Xms and -Xmx
 pinned to task.mem; otherwise, results are affected as the JVM resizes the
 heap.
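 A sketch of the suggested change in the benchmark build file (the target
 wiring and the ${task.alg} property are illustrative assumptions):
 {noformat}
 <java classname="org.apache.lucene.benchmark.byTask.Benchmark" fork="true">
   <!-- pin min and max heap so the JVM never resizes the heap mid-run -->
   <jvmarg value="-Xms${task.mem}"/>
   <jvmarg value="-Xmx${task.mem}"/>
   <arg file="${task.alg}"/>
 </java>
 {noformat}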






[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 1050 - Failure!

2013-11-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/1050/
Java: 64bit/jdk1.6.0 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 8987 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:426: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:406: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:39: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/extra-targets.xml:37: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build.xml:189: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/common-build.xml:491: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/common-build.xml:413: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/common-build.xml:359: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/common-build.xml:379: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/common-build.xml:362: 
impossible to resolve dependencies:
resolve failed - see output for details

Total time: 53 minutes 0 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.6.0 -XX:-UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Reopened] (LUCENE-2033) exposed MultiTermDocs and MultiTermPositions from package protected to public

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened LUCENE-2033:



 exposed MultiTermDocs and MultiTermPositions from package protected to public
 -

 Key: LUCENE-2033
 URL: https://issues.apache.org/jira/browse/LUCENE-2033
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 2.9
Reporter: John Wang

 Making these classes public would help classes that extend MultiReader.






[jira] [Reopened] (LUCENE-2511) OutOfMemoryError should not be wrapped in an IllegalStateException, as it is misleading for fault-tolerant programs

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened LUCENE-2511:



 OutOfMemoryError should not be wrapped in an IllegalStateException, as it is 
 misleading for fault-tolerant programs
 ---

 Key: LUCENE-2511
 URL: https://issues.apache.org/jira/browse/LUCENE-2511
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 2.4.1
Reporter: David Sitsky
Priority: Minor

 I have a program that does explicit commits.  On one occasion, I saw the
 following exception thrown:
 java.lang.IllegalStateException: this writer hit an OutOfMemoryError; cannot
 commit
 at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:4061)
 at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:4136)
 at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:4114)
 In our program, we treat all errors as fatal and terminate the program (and
 restart).  Runtime exceptions are sometimes handled differently, since they
 are usually indicative of a programming bug that might be recoverable in
 some situations.
 I think the OutOfMemoryError should not be wrapped as a runtime exception,
 as this can mask a serious issue from a fault-tolerant application.
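 A minimal sketch of the fault-tolerant pattern this wrapping defeats (not
 the reporter's code; the handling policy shown is illustrative):
 {noformat}
 import java.io.IOException;
 import org.apache.lucene.index.IndexWriter;

 public class CommitPolicy {
   static void commitSafely(IndexWriter writer) throws IOException {
     try {
       writer.commit();
     } catch (Error e) {
       // Fatal by policy: OutOfMemoryError and friends -> terminate/restart.
       throw e;
     } catch (RuntimeException e) {
       // Treated as a possibly recoverable programming bug -- but because
       // IndexWriter rethrows the earlier OOM as an IllegalStateException,
       // the fatal case lands in this branch instead of the one above.
       System.err.println("attempting to recover from: " + e);
     }
   }
 }
 {noformat}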






[jira] [Reopened] (LUCENE-373) Query parts ending with a colon are handled badly

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened LUCENE-373:
---


 Query parts ending with a colon are handled badly
 -

 Key: LUCENE-373
 URL: https://issues.apache.org/jira/browse/LUCENE-373
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 1.4
 Environment: Operating System: Windows 2000
 Platform: PC
Reporter: Andrew Stevens
Priority: Minor
  Labels: newdev

 I'm using Lucene 1.4.3, running
 Query query = QueryParser.parse(queryString, "contents", new
 StandardAnalyzer());
 If queryString is "search title:", i.e. specifying a field name without a
 corresponding value, I get a parsing exception:
 Encountered EOF at line 1, column 8.
 Was expecting one of:
 ( ...
 QUOTED ...
 TERM ...
 PREFIXTERM ...
 WILDTERM ...
 [ ...
 { ...
 NUMBER ...
 If queryString is "title: search", there's no exception.  However, the parsed
 query which is returned is title:search.  If queryString is "title: contents:
 text", the parsed query is title:contents and the "text" part is ignored
 completely.  When queryString is "title: text contents:", the above exception
 is produced again.
 This seems inconsistent.  Given that it's pointless to search for an empty
 string (since it has no tokens), I'd expect both "search title:" and "title:
 search" to be parsed as search (or, given the default field I specified,
 contents:search), and "title: contents: text" and "title: text contents:" to
 parse as text (contents:text), i.e. parts which have no term are ignored.  At
 worst I'd expect them all to throw a ParseException rather than just the ones
 with the colon at the end of the string.
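 A minimal sketch of the inconsistency against the 1.4-era static parse API
 (assumed from the report; later releases use a QueryParser instance instead):
 {noformat}
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.queryParser.QueryParser;

 public class ColonParsing {
   public static void main(String[] args) throws Exception {
     // Throws ParseException ("Encountered EOF"):
     // QueryParser.parse("search title:", "contents", new StandardAnalyzer());

     // Parses, silently dropping "text"; prints title:contents
     System.out.println(QueryParser.parse("title: contents: text",
         "contents", new StandardAnalyzer()));
   }
 }
 {noformat}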






[jira] [Reopened] (LUCENE-375) fish*~ parses to PrefixQuery - should be a parse exception

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened LUCENE-375:
---


 fish*~ parses to PrefixQuery - should be a parse exception
 --

 Key: LUCENE-375
 URL: https://issues.apache.org/jira/browse/LUCENE-375
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 1.4
 Environment: Operating System: other
 Platform: Other
Reporter: Erik Hatcher
Assignee: Luis Alves
Priority: Minor

 QueryParser parses "fish*~" into a fish* PrefixQuery and silently drops the
 ~.  This really should be a parse exception.
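 In the same vein, a one-line illustration (reusing the 1.4-era setup from
 the LUCENE-373 sketch above; assumed, not verified against 1.4.3):
 {noformat}
 // Prints fish* -- a PrefixQuery; the trailing ~ is silently dropped.
 System.out.println(
     QueryParser.parse("fish*~", "contents", new StandardAnalyzer()));
 {noformat}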






[jira] [Reopened] (SOLR-1514) Facet search results contain 0:0 entries although '0' values were not indexed.

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened SOLR-1514:
--


 Facet search results contain 0:0 entries although '0' values were not indexed.
 --

 Key: SOLR-1514
 URL: https://issues.apache.org/jira/browse/SOLR-1514
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 1.3
 Environment: Solr is on: Linux  2.6.18-92.1.13.el5xen
Reporter: Renata Perkowska

 Hi,
 in my JMeter ATs I can see that under some circumstances facet search
 results contain '0' both as keys
 and values for the integer field called 'year', although I never index zeros.
 When I do a normal search, I don't see any indexed fields with zeros.
 When I run my facet test (using JMeter) in isolation, everything works fine.
 It happens only when it is run after other tests
 (and other indexing/deleting). On the other hand, it shouldn't be the case
 that other indexing runs are influencing this test, as at the end of each
 test I'm deleting the
 indexed documents, so before running the facet test the index is empty.
 My facet test looks as follows:
  1. Index a group of documents
  2. Perform a search on facets
  3. Remove the documents from the index.
 The results that I'm getting for the integer field 'year':
  1990:4
  1995:4
  0:0
  1991:0
  1992:0
  1993:0
  1994:0
  1996:0
  1997:0
  1998:0
 I'm indexing only values 1990-1999, so there certainly shouldn't be any '0'
 keys in the result set.
 The index is optimized not after each document deletion, but only when the
 index is loaded/unloaded, so optimization won't
 solve the problem in this case.
 If facet.mincount > 0 is provided, then I'm not getting 0:0, but the other
 entries with '0' values are gone as well:
 1990:4
 1995:4
 I'm also indexing text fields, and I don't see a similar situation there.
 This bug only happens for integer fields.
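 A minimal sketch of the kind of request involved (the URL, core name, and
 parameters are illustrative assumptions):
 {noformat}
 http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=year&facet.mincount=1
 {noformat}
 With facet.mincount=1 (the mincount > 0 case above), zero-count buckets
 such as 0:0 and 1991:0 are suppressed.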






[jira] [Reopened] (LUCENE-1997) Explore performance of multi-PQ vs single-PQ sorting API

2013-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened LUCENE-1997:



 Explore performance of multi-PQ vs single-PQ sorting API
 

 Key: LUCENE-1997
 URL: https://issues.apache.org/jira/browse/LUCENE-1997
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 2.9
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, 
 LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, 
 LUCENE-1997.patch, LUCENE-1997.patch


 Spinoff from the recent Lucene 2.9 sorting algorithm thread on java-dev,
 where a simpler (non-segment-based) comparator API is proposed that
 gathers results into multiple PQs (one per segment) and then merges
 them in the end.
 I started from John's multi-PQ code and worked it into
 contrib/benchmark so that we could run perf tests.  Then I generified
 the Python script I use for running search benchmarks (in
 contrib/benchmark/sortBench.py).
 The script first creates indexes with 1M docs (based on
 SortableSingleDocSource, and based on wikipedia, if available).  Then
 it runs various combinations:
   * Index with 20 balanced segments vs index with the normal log
 segment size
   * Queries with different numbers of hits (only for wikipedia index)
   * Different top N
   * Different sorts (by title, for wikipedia, and by random string,
 random int, and country for the random index)
 For each test, 7 search rounds are run and the best QPS is kept.  The
 script runs singlePQ then multiPQ, and records the resulting best QPS
 for each, producing a table (in Jira format) as output.
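 A minimal sketch of the multi-PQ idea with plain java.util collections (the
 real patch works against Lucene's collector API with per-segment
 comparators; plain int scores stand in for arbitrary sort keys):
 {noformat}
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.PriorityQueue;

 public class MultiPQ {
   // One bounded min-heap per segment, merged into a final bounded heap.
   public static List<Integer> topN(int[][] segmentScores, int n) {
     PriorityQueue<Integer> merged = new PriorityQueue<Integer>();
     for (int[] seg : segmentScores) {
       PriorityQueue<Integer> pq = new PriorityQueue<Integer>();
       for (int score : seg) {
         pq.offer(score);
         if (pq.size() > n) pq.poll();   // keep only the n best per segment
       }
       for (int survivor : pq) {         // merge the per-segment survivors
         merged.offer(survivor);
         if (merged.size() > n) merged.poll();
       }
     }
     List<Integer> result = new ArrayList<Integer>(merged);
     Collections.sort(result, Collections.reverseOrder());
     return result;                      // best n scores, descending
   }
 }
 {noformat}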





