[jira] [Updated] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-4470:
--

Attachment: SOLR-4470.patch

New patch SOLR-4470.patch fitting on current trunk r1576004

Any final comments or wishes before commit?

 Support for basic http auth in internal solr requests
 -

 Key: SOLR-4470
 URL: https://issues.apache.org/jira/browse/SOLR-4470
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, multicore, replication (java), SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Jan Høydahl
  Labels: authentication, https, solrclient, solrcloud, ssl
 Fix For: 5.0

 Attachments: SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470_branch_4x_r1452629.patch, 
 SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
 SOLR-4470_trunk_r1568857.patch


 We want to protect any HTTP resource (URL). We want to require credentials no 
 matter what kind of HTTP request you make to a Solr node.
 It can fairly easily be achieved as described on 
 http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr nodes 
 also make internal requests to other Solr nodes, and for those to work 
 credentials need to be provided as well.
 Ideally we would like to forward the credentials from a particular request to 
 all the internal sub-requests it triggers, e.g. for search and update 
 requests.
 But there are also internal requests
 * that are only indirectly/asynchronously triggered by outside requests (e.g. 
 shard creation/deletion/etc. based on calls to the Collection API)
 * that have no relation at all to an outside super-request (e.g. 
 replica syncing)
 We would like to aim at a solution where the original credentials are 
 forwarded when a request directly/synchronously triggers a sub-request, with a 
 fallback to configured internal credentials for the 
 asynchronous/non-rooted requests.
 In our solution we aim at supporting only basic HTTP auth, but we would 
 like to build a framework around it, so that not too much refactoring is 
 needed if you later want to add support for other kinds of auth (e.g. digest).
 We will work on a solution, but we created this JIRA issue early in order to get 
 input/comments from the community as early as possible.
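To make the intended fallback concrete, here is a minimal, purely illustrative sketch of the rule described above (class and method names are hypothetical, not the ones used in the attached patches):

{code}
/** Illustration only: forward the outer request's credentials, else fall back to configured internal ones. */
final class InternalCredentialsFallbackSketch {

  static final class AuthCredentials {
    final String user, password;
    AuthCredentials(String user, String password) { this.user = user; this.password = password; }
  }

  /** Credentials configured for internal, non-rooted requests (e.g. via VM params or, later, ZK). */
  private final AuthCredentials internalCredentials;

  InternalCredentialsFallbackSketch(AuthCredentials internalCredentials) {
    this.internalCredentials = internalCredentials;
  }

  /**
   * A sub-request triggered directly/synchronously by an outer request reuses that request's
   * credentials; asynchronous/non-rooted requests have no outer request and fall back to the
   * configured internal credentials.
   */
  AuthCredentials credentialsForSubRequest(AuthCredentials outerRequestCredentials) {
    return outerRequestCredentials != null ? outerRequestCredentials : internalCredentials;
  }
}
{code}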



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5851) Disabling lookups into disabled caches

2014-03-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931448#comment-13931448
 ] 

Shawn Heisey commented on SOLR-5851:


What I just wrote has given me an idea for how we can detect the disabled state 
where the user requested a size of zero -- if the field for the underlying data 
structure is null, we know we can return immediately. After I've had a chance 
to get some sleep, I can double-check some things to make sure this will 
actually work.
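For reference, a minimal sketch of that short-circuit (the class, field and types below are hypothetical stand-ins for the real SolrCache implementations):

{code}
import java.util.Map;

/** Hypothetical cache wrapper: a zero-size ("disabled") cache never allocates its backing map. */
final class DisabledAwareCacheSketch<K, V> {
  private final Map<K, V> map; // null when the user configured size=0

  DisabledAwareCacheSketch(Map<K, V> map) { this.map = map; }

  V get(K key) {
    if (map == null) return null; // disabled: skip the lookup entirely
    return map.get(key);
  }

  V put(K key, V value) {
    if (map == null) return null; // disabled: store nothing
    return map.put(key, value);
  }
}
{code}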


 Disabling lookups into disabled caches
 --

 Key: SOLR-5851
 URL: https://issues.apache.org/jira/browse/SOLR-5851
 Project: Solr
  Issue Type: Improvement
Reporter: Otis Gospodnetic
Priority: Minor

 When a cache is disabled, ideally lookups into that cache should be 
 completely disabled, too.
 See: 
 http://search-lucene.com/m/QTPaTfMT52subj=Disabling+lookups+into+disabled+caches



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5851) Disabling lookups into disabled caches

2014-03-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931447#comment-13931447
 ] 

Shawn Heisey commented on SOLR-5851:


I have learned that FastLRUCache will actually create a cache of max size 2 
when you tell it that you want the size to be zero, because of these two lines 
in its init method:

{code}
if (minLimit==0) minLimit=1;
if (limit <= minLimit) limit=minLimit+1;
{code}

I initially tried to just add code to the get() and put() methods of all the 
cache implementations that returns immediately if the cache size is zero, but 
that didn't work and broke most of the tests.  In hindsight, this makes sense - 
it was not the right solution.

If we're going to implement this idea, we need some other way to detect that 
the *requested* cache size was zero so we can short-circuit all the cache 
methods into returning immediately and not taking any action.  Ideally, we 
would also avoid creating the underlying data structure at all.
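As a sketch of the "remember the requested size" idea (names are made up, not the actual cache code), the check has to happen in init() before the limits get clamped:

{code}
/** Hypothetical init()-time flag: record whether the *requested* size was zero, before any clamping. */
final class RequestedSizeFlagSketch {
  private boolean disabled;

  void init(int requestedSize) {
    disabled = (requestedSize == 0); // decide before minimum limits are applied
    if (!disabled) {
      // ... create the underlying data structure with the (possibly clamped) size ...
    }
  }

  /** get()/put()/etc. would return immediately when this is true. */
  boolean isDisabled() { return disabled; }
}
{code}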


 Disabling lookups into disabled caches
 --

 Key: SOLR-5851
 URL: https://issues.apache.org/jira/browse/SOLR-5851
 Project: Solr
  Issue Type: Improvement
Reporter: Otis Gospodnetic
Priority: Minor

 When a cache is disabled, ideally lookups into that cache should be 
 completely disabled, too.
 See: 
 http://search-lucene.com/m/QTPaTfMT52subj=Disabling+lookups+into+disabled+caches



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1125: POMs out of sync

2014-03-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1125/

No tests ran.

Build Log:
[...truncated 28789 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:483: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:164: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/solr/build.xml:582:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:440:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:1447:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:530:
 Error deploying artifact 'org.apache.solr:solr-analysis-extras:jar': Error 
retrieving previous build number for artifact 
'org.apache.solr:solr-analysis-extras:jar': repository metadata for: 'snapshot 
org.apache.solr:solr-analysis-extras:5.0-SNAPSHOT' could not be retrieved from 
repository: apache.snapshots.https due to an error: Error transferring file: 
Server returned HTTP response code: 503 for URL: 
https://repository.apache.org/content/repositories/snapshots/org/apache/solr/solr-analysis-extras/5.0-SNAPSHOT/maven-metadata.xml

Total time: 21 minutes 10 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-5477) Async execution of OverseerCollectionProcessor tasks

2014-03-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5477:
---

Attachment: SOLR-5477.patch

Patch with SolrJ support and tests.

 Async execution of OverseerCollectionProcessor tasks
 

 Key: SOLR-5477
 URL: https://issues.apache.org/jira/browse/SOLR-5477
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Anshum Gupta
 Attachments: SOLR-5477-CoreAdminStatus.patch, 
 SOLR-5477-updated.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch


 Typical collection admin commands are long running and it is very common to 
 have the requests time out. It is more of a problem if the cluster is 
 very large. Add an option to run these commands asynchronously:
 * add an extra param async=true for all collection commands
 * the task is written to ZK and the caller is returned a task id
 * a separate collection admin command will be added to poll the status of the 
 task, e.g. command=status&id=7657668909
 * if id is not passed, all running async tasks should be listed
 A separate queue is created to store in-process tasks. After the tasks are 
 completed the queue entry is removed. OverseerCollectionProcessor will perform 
 these tasks in multiple threads.
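For what it's worth, a rough SolrJ sketch of how a client could use this, with parameter names taken from the description above (they may well differ in the final patch; the URL and collection name are placeholders):

{code}
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class AsyncCollectionApiSketch {
  public static void main(String[] args) throws Exception {
    SolrServer server = new HttpSolrServer("http://localhost:8983/solr");

    // Fire a long-running collection command asynchronously.
    ModifiableSolrParams create = new ModifiableSolrParams();
    create.set("action", "CREATE");
    create.set("name", "mycollection");
    create.set("numShards", 4);
    create.set("async", "true"); // per the description; the response carries the task id
    QueryRequest createReq = new QueryRequest(create);
    createReq.setPath("/admin/collections");
    System.out.println(server.request(createReq)); // returns quickly with a task id

    // Later: poll the status of that task (id value is a placeholder).
    ModifiableSolrParams status = new ModifiableSolrParams();
    status.set("command", "status");
    status.set("id", "7657668909");
    QueryRequest statusReq = new QueryRequest(status);
    statusReq.setPath("/admin/collections");
    System.out.println(server.request(statusReq));
  }
}
{code}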



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5476) Facet sampling

2014-03-12 Thread Rob Audenaerde (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931510#comment-13931510
 ] 

Rob Audenaerde commented on LUCENE-5476:


Hi all,

Making good progress, only I'm not really sure what to do with the {{scores}}. 
I could keep only the scores of the sampled documents (creating a new 
{{scores[]}} in {{createSample}}). Or should scoring just be left out completely 
for the sampler (passing keepScores = false, removing the c'tor param and setting 
{{scores}} to {{null}})?
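For concreteness, a toy sketch of the first option (this is not the patch's {{createSample}}, just the array copy it would do):

{code}
/** Toy sketch: keep only the sampled documents' scores in a new, smaller array. */
final class SampleScoresSketch {
  /**
   * @param keep   keep[i] is true when hit i of the original segment result was sampled
   * @param scores original per-hit scores, or null when scoring was not kept
   */
  static float[] sampleScores(boolean[] keep, float[] scores) {
    if (scores == null) return null; // keepScores == false: nothing to carry over
    int kept = 0;
    for (boolean k : keep) if (k) kept++;
    float[] sampled = new float[kept]; // the new scores[] mentioned above
    int out = 0;
    for (int i = 0; i < keep.length; i++) {
      if (keep[i]) sampled[out++] = scores[i];
    }
    return sampled;
  }
}
{code}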

 Facet sampling
 --

 Key: LUCENE-5476
 URL: https://issues.apache.org/jira/browse/LUCENE-5476
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Rob Audenaerde
 Attachments: LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, SamplingComparison_SamplingFacetsCollector.java, 
 SamplingFacetsCollector.java


 With LUCENE-5339 facet sampling disappeared. 
 When trying to display facet counts on large datasets (10M documents) 
 counting facets is rather expensive, as all the hits are collected and 
 processed. 
 Sampling greatly reduced this and thus provided a nice speedup. Could it be 
 brought back?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931530#comment-13931530
 ] 

Jan Høydahl commented on SOLR-4470:
---

There's a TODO in SystemPropertiesAuthCredentialsInternalRequestFactory
{blockquote}
TODO since internalAuthCredentials is something you use for internal requests 
against other Solr-nodes it should never
have different values for different Solr-nodes in the same cluster, and 
therefore the credentials ought to be specified
on a global level (e.g. in ZK) instead of on a per node level as VM-params are
{blockquote}

 Support for basic http auth in internal solr requests
 -

 Key: SOLR-4470
 URL: https://issues.apache.org/jira/browse/SOLR-4470
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, multicore, replication (java), SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Jan Høydahl
  Labels: authentication, https, solrclient, solrcloud, ssl
 Fix For: 5.0

 Attachments: SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470_branch_4x_r1452629.patch, 
 SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
 SOLR-4470_trunk_r1568857.patch


 We want to protect any HTTP resource (URL). We want to require credentials no 
 matter what kind of HTTP request you make to a Solr node.
 It can fairly easily be achieved as described on 
 http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr nodes 
 also make internal requests to other Solr nodes, and for those to work 
 credentials need to be provided as well.
 Ideally we would like to forward the credentials from a particular request to 
 all the internal sub-requests it triggers, e.g. for search and update 
 requests.
 But there are also internal requests
 * that are only indirectly/asynchronously triggered by outside requests (e.g. 
 shard creation/deletion/etc. based on calls to the Collection API)
 * that have no relation at all to an outside super-request (e.g. 
 replica syncing)
 We would like to aim at a solution where the original credentials are 
 forwarded when a request directly/synchronously triggers a sub-request, with a 
 fallback to configured internal credentials for the 
 asynchronous/non-rooted requests.
 In our solution we aim at supporting only basic HTTP auth, but we would 
 like to build a framework around it, so that not too much refactoring is 
 needed if you later want to add support for other kinds of auth (e.g. digest).
 We will work on a solution, but we created this JIRA issue early in order to get 
 input/comments from the community as early as possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5476) Facet sampling

2014-03-12 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931543#comment-13931543
 ] 

Shai Erera commented on LUCENE-5476:


If it's not too much work for you, I think you should just create a new 
float[]. You can separate the code so that you don't check whether scores need 
to be kept for every sampled document, at the cost of duplicating code. But 
otherwise I think it would be good if we kept that functionality for sampled 
docs too.

 Facet sampling
 --

 Key: LUCENE-5476
 URL: https://issues.apache.org/jira/browse/LUCENE-5476
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Rob Audenaerde
 Attachments: LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, SamplingComparison_SamplingFacetsCollector.java, 
 SamplingFacetsCollector.java


 With LUCENE-5339 facet sampling disappeared. 
 When trying to display facet counts on large datasets (10M documents) 
 counting facets is rather expensive, as all the hits are collected and 
 processed. 
 Sampling greatly reduced this and thus provided a nice speedup. Could it be 
 brought back?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5501) Ability to work with cold replicas

2014-03-12 Thread Manuel Lenormand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931540#comment-13931540
 ] 

Manuel Lenormand commented on SOLR-5501:


Alright. Can I count on the cold_replica role to be in the clusterstate.json 
with the other replica properties?

 Ability to work with cold replicas
 --

 Key: SOLR-5501
 URL: https://issues.apache.org/jira/browse/SOLR-5501
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5.1
Reporter: Manuel Lenormand
  Labels: performance
 Fix For: 4.7


 Following this conversation from the mailing list:
 http://lucene.472066.n3.nabble.com/Proposal-for-new-feature-cold-replicas-brainstorming-td4097501.html
 Should give the ability to use replicas mainly as backup cores and not for 
 handling a high qps rate. 
 This way you would avoid using the caching resources (Solr and OS) used when 
 routing a query to a replica. 
 With many replicas it's harder to hit the Solr cache (the same query may hit 
 another replica), and having many replicas on the same instance would cause a 
 useless competition for the OS memory used for caching.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5800) Admin UI - Analysis form doesn't render results correctly when a CharFilter is used.

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931555#comment-13931555
 ] 

ASF subversion and git services commented on SOLR-5800:
---

Commit 1576652 from [~steffkes] in branch 'dev/trunk'
[ https://svn.apache.org/r1576652 ]

SOLR-5800: Admin UI - Analysis form doesn't render results correctly when a 
CharFilter is used

 Admin UI - Analysis form doesn't render results correctly when a CharFilter 
 is used.
 

 Key: SOLR-5800
 URL: https://issues.apache.org/jira/browse/SOLR-5800
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.7
Reporter: Timothy Potter
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5800-sample.json, SOLR-5800.patch


 I have an example in Solr In Action that uses the
 PatternReplaceCharFilterFactory and now it doesn't work in 4.7.0.
 Specifically, the fieldType is:
 <fieldType name="text_microblog" class="solr.TextField"
            positionIncrementGap="100">
   <analyzer>
     <charFilter class="solr.PatternReplaceCharFilterFactory"
                 pattern="([a-zA-Z])\1+"
                 replacement="$1$1"/>
     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
     <filter class="solr.WordDelimiterFilterFactory"
             generateWordParts="1"
             splitOnCaseChange="0"
             splitOnNumerics="0"
             stemEnglishPossessive="1"
             preserveOriginal="0"
             catenateWords="1"
             generateNumberParts="1"
             catenateNumbers="0"
             catenateAll="0"
             types="wdfftypes.txt"/>
     <filter class="solr.StopFilterFactory"
             ignoreCase="true"
             words="lang/stopwords_en.txt"
             />
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.ASCIIFoldingFilterFactory"/>
     <filter class="solr.KStemFilterFactory"/>
   </analyzer>
 </fieldType>
 The PatternReplaceCharFilterFactory (PRCF) is used to collapse
 repeated letters in a term down to a max of 2, so that a term like
 #yummm... would be collapsed to #yumm.
 When I run some text through this analyzer using the Analysis form,
 the output is as if the resulting text is unavailable to the
 tokenizer. In other words, the only results being displayed in the
 output on the form are for the PRCF.
 This example stopped working in 4.7.0 and I've verified it worked
 correctly in 4.6.1.
 Initially, I thought this might be an issue with the actual analysis,
 but the analyzer actually works when indexing / querying. Then,
 looking at the JSON response in the Developer console with Chrome, I
 see the JSON that comes back includes output for all the components in
 my chain (see below) ... so looks like a UI rendering issue to me?
 {"responseHeader":{"status":0,"QTime":24},"analysis":{"field_types":{"text_microblog":{"index":[
 "org.apache.lucene.analysis.pattern.PatternReplaceCharFilter",
 "#Yumm :) Drinking a latte at Caffe Grecco in SF's historic North Beach... Learning text analysis with #SolrInAction by @ManningBooks on my i-Pad foo5",
 "org.apache.lucene.analysis.core.WhitespaceTokenizer",
 [{"text":"#Yumm","raw_bytes":"[23 59 75 6d 6d]","start":0,"end":6,"position":1,"positionHistory":[1],"type":"word"},
 {"text":":)","raw_bytes":"[3a 29]","start":7,"end":9,"position":2,"positionHistory":[2],"type":"word"},
 {"text":"Drinking","raw_bytes":"[44 72 69 6e 6b 69 6e 67]","start":10,"end":18,"position":3,"positionHistory":[3],"type":"word"},
 {"text":"a","raw_bytes":"[61]","start":19,"end":20,"position":4,"positionHistory":[4],"type":"word"},
 {"text":"latte","raw_bytes":"[6c ...
 the JSON returned to the browser has evidence that the full analysis chain 
 was applied, so this seems to just be a rendering issue.
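One way to double-check this outside the UI is to hit the field-analysis handler directly with SolrJ and inspect the raw response; the core URL and sample text below are placeholders, and /analysis/field is assumed to be configured as in the stock solrconfig:

{code}
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class AnalysisResponseCheck {
  public static void main(String[] args) throws Exception {
    SolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("analysis.fieldtype", "text_microblog");
    params.set("analysis.fieldvalue", "some text to run through the chain");

    QueryRequest req = new QueryRequest(params);
    req.setPath("/analysis/field"); // same handler the Analysis screen uses
    // The full chain shows up in this response even though the UI only renders the CharFilter.
    System.out.println(server.request(req));
  }
}
{code}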



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-12 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931561#comment-13931561
 ] 

Per Steffensen commented on SOLR-4470:
--

About the TODO above. I agree! But it can be a step 2.

 Support for basic http auth in internal solr requests
 -

 Key: SOLR-4470
 URL: https://issues.apache.org/jira/browse/SOLR-4470
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, multicore, replication (java), SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Jan Høydahl
  Labels: authentication, https, solrclient, solrcloud, ssl
 Fix For: 5.0

 Attachments: SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470_branch_4x_r1452629.patch, 
 SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
 SOLR-4470_trunk_r1568857.patch


 We want to protect any HTTP resource (URL). We want to require credentials no 
 matter what kind of HTTP request you make to a Solr node.
 It can fairly easily be achieved as described on 
 http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr nodes 
 also make internal requests to other Solr nodes, and for those to work 
 credentials need to be provided as well.
 Ideally we would like to forward the credentials from a particular request to 
 all the internal sub-requests it triggers, e.g. for search and update 
 requests.
 But there are also internal requests
 * that are only indirectly/asynchronously triggered by outside requests (e.g. 
 shard creation/deletion/etc. based on calls to the Collection API)
 * that have no relation at all to an outside super-request (e.g. 
 replica syncing)
 We would like to aim at a solution where the original credentials are 
 forwarded when a request directly/synchronously triggers a sub-request, with a 
 fallback to configured internal credentials for the 
 asynchronous/non-rooted requests.
 In our solution we aim at supporting only basic HTTP auth, but we would 
 like to build a framework around it, so that not too much refactoring is 
 needed if you later want to add support for other kinds of auth (e.g. digest).
 We will work on a solution, but we created this JIRA issue early in order to get 
 input/comments from the community as early as possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5800) Admin UI - Analysis form doesn't render results correctly when a CharFilter is used.

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931575#comment-13931575
 ] 

ASF subversion and git services commented on SOLR-5800:
---

Commit 1576671 from [~steffkes] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576671 ]

SOLR-5800: Admin UI - Analysis form doesn't render results correctly when a 
CharFilter is used (merge r1576652)

 Admin UI - Analysis form doesn't render results correctly when a CharFilter 
 is used.
 

 Key: SOLR-5800
 URL: https://issues.apache.org/jira/browse/SOLR-5800
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.7
Reporter: Timothy Potter
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5800-sample.json, SOLR-5800.patch


 I have an example in Solr In Action that uses the
 PatternReplaceCharFilterFactory and now it doesn't work in 4.7.0.
 Specifically, the fieldType is:
 <fieldType name="text_microblog" class="solr.TextField"
            positionIncrementGap="100">
   <analyzer>
     <charFilter class="solr.PatternReplaceCharFilterFactory"
                 pattern="([a-zA-Z])\1+"
                 replacement="$1$1"/>
     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
     <filter class="solr.WordDelimiterFilterFactory"
             generateWordParts="1"
             splitOnCaseChange="0"
             splitOnNumerics="0"
             stemEnglishPossessive="1"
             preserveOriginal="0"
             catenateWords="1"
             generateNumberParts="1"
             catenateNumbers="0"
             catenateAll="0"
             types="wdfftypes.txt"/>
     <filter class="solr.StopFilterFactory"
             ignoreCase="true"
             words="lang/stopwords_en.txt"
             />
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.ASCIIFoldingFilterFactory"/>
     <filter class="solr.KStemFilterFactory"/>
   </analyzer>
 </fieldType>
 The PatternReplaceCharFilterFactory (PRCF) is used to collapse
 repeated letters in a term down to a max of 2, so that a term like
 #yummm... would be collapsed to #yumm.
 When I run some text through this analyzer using the Analysis form,
 the output is as if the resulting text is unavailable to the
 tokenizer. In other words, the only results being displayed in the
 output on the form are for the PRCF.
 This example stopped working in 4.7.0 and I've verified it worked
 correctly in 4.6.1.
 Initially, I thought this might be an issue with the actual analysis,
 but the analyzer actually works when indexing / querying. Then,
 looking at the JSON response in the Developer console with Chrome, I
 see the JSON that comes back includes output for all the components in
 my chain (see below) ... so looks like a UI rendering issue to me?
 {"responseHeader":{"status":0,"QTime":24},"analysis":{"field_types":{"text_microblog":{"index":[
 "org.apache.lucene.analysis.pattern.PatternReplaceCharFilter",
 "#Yumm :) Drinking a latte at Caffe Grecco in SF's historic North Beach... Learning text analysis with #SolrInAction by @ManningBooks on my i-Pad foo5",
 "org.apache.lucene.analysis.core.WhitespaceTokenizer",
 [{"text":"#Yumm","raw_bytes":"[23 59 75 6d 6d]","start":0,"end":6,"position":1,"positionHistory":[1],"type":"word"},
 {"text":":)","raw_bytes":"[3a 29]","start":7,"end":9,"position":2,"positionHistory":[2],"type":"word"},
 {"text":"Drinking","raw_bytes":"[44 72 69 6e 6b 69 6e 67]","start":10,"end":18,"position":3,"positionHistory":[3],"type":"word"},
 {"text":"a","raw_bytes":"[61]","start":19,"end":20,"position":4,"positionHistory":[4],"type":"word"},
 {"text":"latte","raw_bytes":"[6c ...
 the JSON returned to the browser has evidence that the full analysis chain 
 was applied, so this seems to just be a rendering issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-03-12 Thread Nikhil Chhaochharia (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931580#comment-13931580
 ] 

Nikhil Chhaochharia commented on LUCENE-5205:
-

We will try reducing the stop words to some impossible token and report back in 
a few days.

We need the user fields and a few other features of the edismax parser, hence 
we have modified it to send only 'phrase' queries to SpanQueryParser. It's a 
huge hack, but we would like to include this functionality without the overhead 
of building our own parser from scratch.

 [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
 classic QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Fix For: 4.7

 Attachments: LUCENE-5205-cleanup-tests.patch, 
 LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
 LUCENE-5205_dateTestReInitPkgPrvt.patch, 
 LUCENE-5205_improve_stop_word_handling.patch, 
 LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
 SpanQueryParser_v1.patch.gz, patch.txt


 This parser extends QueryParserBase and includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 At a high level, there's a first pass BooleanQuery/field parser and then a 
 span query parser handles all terminal nodes and phrases.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: "jakarta apache"
 * phrase with slop: "jakarta apache"~3
 * default or clause: jakarta apache
 * grouping or clause: (jakarta apache)
 * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
 * multiple fields: title:lucene author:hatcher
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require in order for phrases with slop with the \~ operator: 
 jakarta apache\~3
 * Can specify not near: "fever bieber"!\~3,10 ::
 find fever but not if bieber appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find jakarta within 3 words of apache, and that hit has to be within 
 four words before lucene
 * Can also use \[\] for single level phrasal queries instead of "" as in: 
 \[jakarta apache\]
 * Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find apache and then either lucene or solr within three words.
 * Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
 words of ap*che and that hit has to be within ten words of something like 
 solr or that lucene regex.
 * Can require at least x number of hits at boolean level: apache AND (lucene 
 solr tika)~2
 * Can use negative only query: -jakarta :: Find all docs that don't contain 
 jakarta
 * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
 potential performance issues!).
 Trivial additions:
 * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance = 1, 
 prefix = 2)
 * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
 <= 2: jakarta~1 (OSA) vs jakarta~1 (Levenshtein)
 This parser can be very useful for concordance tasks (see also LUCENE-5317 
 and LUCENE-5318) and for analytical search.  
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc for SpanQueryParser.
 Any and all feedback is welcome.  Thank you.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5800) Admin UI - Analysis form doesn't render results correctly when a CharFilter is used.

2014-03-12 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) resolved SOLR-5800.
-

Resolution: Fixed

 Admin UI - Analysis form doesn't render results correctly when a CharFilter 
 is used.
 

 Key: SOLR-5800
 URL: https://issues.apache.org/jira/browse/SOLR-5800
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.7
Reporter: Timothy Potter
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5800-sample.json, SOLR-5800.patch


 I have an example in Solr In Action that uses the
 PatternReplaceCharFilterFactory and now it doesn't work in 4.7.0.
 Specifically, the fieldType is:
 <fieldType name="text_microblog" class="solr.TextField"
            positionIncrementGap="100">
   <analyzer>
     <charFilter class="solr.PatternReplaceCharFilterFactory"
                 pattern="([a-zA-Z])\1+"
                 replacement="$1$1"/>
     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
     <filter class="solr.WordDelimiterFilterFactory"
             generateWordParts="1"
             splitOnCaseChange="0"
             splitOnNumerics="0"
             stemEnglishPossessive="1"
             preserveOriginal="0"
             catenateWords="1"
             generateNumberParts="1"
             catenateNumbers="0"
             catenateAll="0"
             types="wdfftypes.txt"/>
     <filter class="solr.StopFilterFactory"
             ignoreCase="true"
             words="lang/stopwords_en.txt"
             />
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.ASCIIFoldingFilterFactory"/>
     <filter class="solr.KStemFilterFactory"/>
   </analyzer>
 </fieldType>
 The PatternReplaceCharFilterFactory (PRCF) is used to collapse
 repeated letters in a term down to a max of 2, so that a term like
 #yummm... would be collapsed to #yumm.
 When I run some text through this analyzer using the Analysis form,
 the output is as if the resulting text is unavailable to the
 tokenizer. In other words, the only results being displayed in the
 output on the form are for the PRCF.
 This example stopped working in 4.7.0 and I've verified it worked
 correctly in 4.6.1.
 Initially, I thought this might be an issue with the actual analysis,
 but the analyzer actually works when indexing / querying. Then,
 looking at the JSON response in the Developer console with Chrome, I
 see the JSON that comes back includes output for all the components in
 my chain (see below) ... so looks like a UI rendering issue to me?
 {"responseHeader":{"status":0,"QTime":24},"analysis":{"field_types":{"text_microblog":{"index":[
 "org.apache.lucene.analysis.pattern.PatternReplaceCharFilter",
 "#Yumm :) Drinking a latte at Caffe Grecco in SF's historic North Beach... Learning text analysis with #SolrInAction by @ManningBooks on my i-Pad foo5",
 "org.apache.lucene.analysis.core.WhitespaceTokenizer",
 [{"text":"#Yumm","raw_bytes":"[23 59 75 6d 6d]","start":0,"end":6,"position":1,"positionHistory":[1],"type":"word"},
 {"text":":)","raw_bytes":"[3a 29]","start":7,"end":9,"position":2,"positionHistory":[2],"type":"word"},
 {"text":"Drinking","raw_bytes":"[44 72 69 6e 6b 69 6e 67]","start":10,"end":18,"position":3,"positionHistory":[3],"type":"word"},
 {"text":"a","raw_bytes":"[61]","start":19,"end":20,"position":4,"positionHistory":[4],"type":"word"},
 {"text":"latte","raw_bytes":"[6c ...
 the JSON returned to the browser has evidence that the full analysis chain 
 was applied, so this seems to just be a rendering issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5800) Admin UI - Analysis form doesn't render results correctly when a CharFilter is used.

2014-03-12 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931586#comment-13931586
 ] 

Stefan Matheis (steffkes) commented on SOLR-5800:
-

Hey Doug, that depends a bit - 'the next available release' I'd say; it might be 
4.7.1 if one is needed, otherwise it would be 4.8.

 Admin UI - Analysis form doesn't render results correctly when a CharFilter 
 is used.
 

 Key: SOLR-5800
 URL: https://issues.apache.org/jira/browse/SOLR-5800
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.7
Reporter: Timothy Potter
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5800-sample.json, SOLR-5800.patch


 I have an example in Solr In Action that uses the
 PatternReplaceCharFilterFactory and now it doesn't work in 4.7.0.
 Specifically, the fieldType is:
 <fieldType name="text_microblog" class="solr.TextField"
            positionIncrementGap="100">
   <analyzer>
     <charFilter class="solr.PatternReplaceCharFilterFactory"
                 pattern="([a-zA-Z])\1+"
                 replacement="$1$1"/>
     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
     <filter class="solr.WordDelimiterFilterFactory"
             generateWordParts="1"
             splitOnCaseChange="0"
             splitOnNumerics="0"
             stemEnglishPossessive="1"
             preserveOriginal="0"
             catenateWords="1"
             generateNumberParts="1"
             catenateNumbers="0"
             catenateAll="0"
             types="wdfftypes.txt"/>
     <filter class="solr.StopFilterFactory"
             ignoreCase="true"
             words="lang/stopwords_en.txt"
             />
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.ASCIIFoldingFilterFactory"/>
     <filter class="solr.KStemFilterFactory"/>
   </analyzer>
 </fieldType>
 The PatternReplaceCharFilterFactory (PRCF) is used to collapse
 repeated letters in a term down to a max of 2, so that a term like
 #yummm... would be collapsed to #yumm.
 When I run some text through this analyzer using the Analysis form,
 the output is as if the resulting text is unavailable to the
 tokenizer. In other words, the only results being displayed in the
 output on the form are for the PRCF.
 This example stopped working in 4.7.0 and I've verified it worked
 correctly in 4.6.1.
 Initially, I thought this might be an issue with the actual analysis,
 but the analyzer actually works when indexing / querying. Then,
 looking at the JSON response in the Developer console with Chrome, I
 see the JSON that comes back includes output for all the components in
 my chain (see below) ... so looks like a UI rendering issue to me?
 {"responseHeader":{"status":0,"QTime":24},"analysis":{"field_types":{"text_microblog":{"index":[
 "org.apache.lucene.analysis.pattern.PatternReplaceCharFilter",
 "#Yumm :) Drinking a latte at Caffe Grecco in SF's historic North Beach... Learning text analysis with #SolrInAction by @ManningBooks on my i-Pad foo5",
 "org.apache.lucene.analysis.core.WhitespaceTokenizer",
 [{"text":"#Yumm","raw_bytes":"[23 59 75 6d 6d]","start":0,"end":6,"position":1,"positionHistory":[1],"type":"word"},
 {"text":":)","raw_bytes":"[3a 29]","start":7,"end":9,"position":2,"positionHistory":[2],"type":"word"},
 {"text":"Drinking","raw_bytes":"[44 72 69 6e 6b 69 6e 67]","start":10,"end":18,"position":3,"positionHistory":[3],"type":"word"},
 {"text":"a","raw_bytes":"[61]","start":19,"end":20,"position":4,"positionHistory":[4],"type":"word"},
 {"text":"latte","raw_bytes":"[6c ...
 the JSON returned to the browser has evidence that the full analysis chain 
 was applied, so this seems to just be a rendering issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5519) Make queueDepth enforcing optional in TopNSearcher

2014-03-12 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-5519:


Attachment: LUCENE-5519.patch

next iteration... I added a new class TopResults that encodes whether the result 
is complete or not, and added assertions where appropriate.

 Make queueDepth enforcing optional in TopNSearcher
 --

 Key: LUCENE-5519
 URL: https://issues.apache.org/jira/browse/LUCENE-5519
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.7
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5519.patch, LUCENE-5519.patch


 Currently TopNSearcher enforces the maxQueueSize based on rejectedCount + 
 topN. I have a use case where I simply don't know the exact limit and I 
 am OK with a top N that is not 100% exact. Yet, if I don't specify the right 
 upper limit for the queue size I get an assertion error when I run tests, and 
 the only workaround is to make the queue unbounded, which looks odd while it 
 would possibly work just fine. I think it's fair to add an option that just 
 doesn't enforce the limit, and if it should be enforced we throw a real 
 exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-5520) ArrayIndexOutOfBoundException in ToChildBlockJoinQuery when there's a deleted parent without any children

2014-03-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-5520:
--

Assignee: Michael McCandless

 ArrayIndexOutOfBoundException in ToChildBlockJoinQuery when there's a deleted 
 parent without any children
 -

 Key: LUCENE-5520
 URL: https://issues.apache.org/jira/browse/LUCENE-5520
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/join
Affects Versions: 4.2, 4.7
Reporter: Sally Ang
Assignee: Michael McCandless
 Attachments: TestBlockJoin.patch, non working patch.patch, 
 testout.txt, working patch.patch


 This problem is found in Lucene 4.2.0 and reproduced in 4.7.0.
 In our app, when we delete a document we always delete all of its children. 
 But not all parents have children. The exception happens for us when a 
 parent without children is deleted.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5520) ArrayIndexOutOfBoundException in ToChildBlockJoinQuery when there's a deleted parent without any children

2014-03-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931665#comment-13931665
 ] 

Michael McCandless commented on LUCENE-5520:


Unfortunately, we are not allowed to check acceptDocs with parent docIDs: that 
bitset is only valid for child documents.  This is because the primary search 
is against children, and IndexSearcher could pass a Filter down low as the 
acceptDocs.

This also means that your app really must delete all child documents for a 
given parent, if you never want to see that parent; but really it's best to 
delete parent + all children, whenever you want to delete.

I have an idea for a possible fix ... I'll test and post a patch.

 ArrayIndexOutOfBoundException in ToChildBlockJoinQuery when there's a deleted 
 parent without any children
 -

 Key: LUCENE-5520
 URL: https://issues.apache.org/jira/browse/LUCENE-5520
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/join
Affects Versions: 4.2, 4.7
Reporter: Sally Ang
Assignee: Michael McCandless
 Attachments: TestBlockJoin.patch, non working patch.patch, 
 testout.txt, working patch.patch


 This problem is found in Lucene 4.2.0 and reproduced in 4.7.0.
 In our app, when we delete a document we always delete all of its children. 
 But not all parents have children. The exception happens for us when a 
 parent without children is deleted.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5520) ArrayIndexOutOfBoundException in ToChildBlockJoinQuery when there's a deleted parent without any children

2014-03-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5520:
---

Attachment: LUCENE-5220.patch

Sally, could you try this patch?

I added a check, when we try to jump to the first child for a given doc, to 
detect the case when that parent has 0 child docs, and then continue in the 
parent loop if so.
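Roughly, the guard amounts to something like this (a stand-alone sketch, not the attached patch, assuming the usual block-join layout where child docs precede their parent and parentBits marks parent docIDs):

{code}
import org.apache.lucene.util.FixedBitSet;

/** Sketch of the zero-children check described above. */
final class ParentWithoutChildrenCheck {
  static boolean hasChildren(FixedBitSet parentBits, int parentDoc) {
    // The first child of parentDoc is the doc right after the previous parent (or doc 0).
    int firstChild = parentDoc == 0 ? 0 : parentBits.prevSetBit(parentDoc - 1) + 1;
    return firstChild < parentDoc; // equal means the block holds only the parent itself
  }
}
{code}

When this returns false the scorer can simply continue with the next parent instead of trying to position itself on a non-existent child.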

 ArrayIndexOutOfBoundException in ToChildBlockJoinQuery when there's a deleted 
 parent without any children
 -

 Key: LUCENE-5520
 URL: https://issues.apache.org/jira/browse/LUCENE-5520
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/join
Affects Versions: 4.2, 4.7
Reporter: Sally Ang
Assignee: Michael McCandless
 Attachments: LUCENE-5220.patch, TestBlockJoin.patch, non working 
 patch.patch, testout.txt, working patch.patch


 This problem is found in Lucene 4.2.0 and reproduced in 4.7.0.
 In our app, when we delete a document we always delete all of its children. 
 But not all parents have children. The exception happens for us when a 
 parent without children is deleted.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931530#comment-13931530
 ] 

Jan Høydahl edited comment on SOLR-4470 at 3/12/14 11:49 AM:
-

([~steff1193] I did not complete my comment earlier)

Checking *TODO*s in the patch

h3. In SystemPropertiesAuthCredentialsInternalRequestFactory

bq. TODO since internalAuthCredentials is something you use for internal 
requests against other Solr-nodes it should never have different values for 
different Solr-nodes in the same cluster, and therefore the credentials ought 
to be specified on a global level (e.g. in ZK) instead of on a per node level 
as VM-params are

Guess this should not be a TODO here since we're inside the SysProp impl, but 
rather open a JIRA for a ZK auth cred impl once this gets in

h3. In SecurityDistributedTest:

{code}
// TODO It ought to have been 403 below instead of -1, but things are just 
crappy with respect to 403 handling around the code
doAndAssertSolrExeption(-1 /*403*/, new Callable<Object>() {
{code}

Do you remember why you put this comment instead of cleaning up the code?

{code}
/* TODO Seems like the single control-node is sending requests to itself in 
order to handle get!?
  controlClient.query(params, METHOD.GET, SEARCH_CREDENTIALS);*/
{code}

Dead code, better remove it, or is there something to clarify?

{code}
// TODO: REMOVE THIS SLEEP WHEN WE HAVE COLLECTION API RESPONSES
Thread.sleep(1);
{code}

There is SOLR-4577, which seems to be fixed already; can we perhaps spin off 
another JIRA to add a return status from 
{{AbstractFullDistribZkTestBase#createCollection()}} and friends? That way we 
can avoid adding another 10s of sleep to this test.

h3. In AuthCredentials

{code}
// TODO we ought to test if authMethods is already unmodifiable and not wrap it 
if it is, but I hope/guess
// Collections.unmodifiableSet will do that internally - I found no way to test 
if a Set is unmodifiable
this.authMethods = Collections.unmodifiableSet(authMethods);
{code}

Anyone who *knows* if this is safe? If so better remove the whole TODO
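For what it's worth, a quick stand-alone check (plain JDK, nothing Solr-specific) shows that, at least on the JDKs we target, {{Collections.unmodifiableSet}} always adds another wrapper and never detects an already-unmodifiable set, so double-wrapping is possible but harmless:

{code}
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class UnmodifiableWrapCheck {
  public static void main(String[] args) {
    Set<String> base = new HashSet<String>(Arrays.asList("basic"));
    Set<String> once = Collections.unmodifiableSet(base);
    Set<String> twice = Collections.unmodifiableSet(once);
    System.out.println(once == twice);           // false: a second wrapper was created
    System.out.println(twice.contains("basic")); // true: reads still delegate correctly
  }
}
{code}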


was (Author: janhoy):
There's a TODO in SystemPropertiesAuthCredentialsInternalRequestFactory
{blockquote}
TODO since internalAuthCredentials is something you use for internal requests 
against other Solr-nodes it should never
have different values for different Solr-nodes in the same cluster, and 
therefore the credentials ought to be specified
on a global level (e.g. in ZK) instead of on a per node level as VM-params are
{blockquote}

 Support for basic http auth in internal solr requests
 -

 Key: SOLR-4470
 URL: https://issues.apache.org/jira/browse/SOLR-4470
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, multicore, replication (java), SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Jan Høydahl
  Labels: authentication, https, solrclient, solrcloud, ssl
 Fix For: 5.0

 Attachments: SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470_branch_4x_r1452629.patch, 
 SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
 SOLR-4470_trunk_r1568857.patch


 We want to protect any HTTP resource (URL). We want to require credentials no 
 matter what kind of HTTP request you make to a Solr node.
 It can fairly easily be achieved as described on 
 http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr nodes 
 also make internal requests to other Solr nodes, and for those to work 
 credentials need to be provided as well.
 Ideally we would like to forward the credentials from a particular request to 
 all the internal sub-requests it triggers, e.g. for search and update 
 requests.
 But there are also internal requests
 * that are only indirectly/asynchronously triggered by outside requests (e.g. 
 shard creation/deletion/etc. based on calls to the Collection API)
 * that have no relation at all to an outside super-request (e.g. 
 replica syncing)
 We would like to aim at a solution where the original credentials are 
 forwarded when a request directly/synchronously triggers a sub-request, with a 
 fallback to configured internal credentials for the 
 asynchronous/non-rooted requests.
 In our solution we aim at supporting only basic HTTP auth, but we would 
 like to build a framework around it, so that not too much refactoring is 
 needed if you later want to add support for other kinds of auth (e.g. digest).
 We will work on a solution, but we created this JIRA issue early in order to get 
 input/comments from the community as early as possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931674#comment-13931674
 ] 

Jan Høydahl commented on SOLR-4470:
---

I think at least we should fix the sleep(1) as a separate JIRA *before* 
committing this patch.

 Support for basic http auth in internal solr requests
 -

 Key: SOLR-4470
 URL: https://issues.apache.org/jira/browse/SOLR-4470
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, multicore, replication (java), SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Jan Høydahl
  Labels: authentication, https, solrclient, solrcloud, ssl
 Fix For: 5.0

 Attachments: SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470_branch_4x_r1452629.patch, 
 SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
 SOLR-4470_trunk_r1568857.patch


 We want to protect any HTTP resource (URL). We want to require credentials no 
 matter what kind of HTTP request you make to a Solr node.
 It can fairly easily be achieved as described on 
 http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr nodes 
 also make internal requests to other Solr nodes, and for those to work 
 credentials need to be provided as well.
 Ideally we would like to forward the credentials from a particular request to 
 all the internal sub-requests it triggers, e.g. for search and update 
 requests.
 But there are also internal requests
 * that are only indirectly/asynchronously triggered by outside requests (e.g. 
 shard creation/deletion/etc. based on calls to the Collection API)
 * that have no relation at all to an outside super-request (e.g. 
 replica syncing)
 We would like to aim at a solution where the original credentials are 
 forwarded when a request directly/synchronously triggers a sub-request, with a 
 fallback to configured internal credentials for the 
 asynchronous/non-rooted requests.
 In our solution we aim at supporting only basic HTTP auth, but we would 
 like to build a framework around it, so that not too much refactoring is 
 needed if you later want to add support for other kinds of auth (e.g. digest).
 We will work on a solution, but we created this JIRA issue early in order to get 
 input/comments from the community as early as possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Move to Java 7 in Lucene/Solr 4.8, use Java 8 in trunk (once officially released)

2014-03-12 Thread Erick Erickson
OK, it looks like Java 7 starting with Solr 4.8 is going to happen.
May I suggest we announce this sooner rather than later? Perhaps
starting with an announcement on the user's list Real Soon Now? Like
before we break people's builds that rely on Java 1.6 with a checkin
to 4x?

There will be organizations for which this is a total deal-killer for
4.8. I'm _not_ advocating that we stay on 1.6 because of that, rather
that we give them a chance to start planning/adjusting soon.

Additionally, there's a fair likelihood that some of the organizations
stuck on 1.6 have a long vetting process before 1.7 would be used, so
the more time they have the better if they consider Solr
mission-critical.

On Tue, Mar 11, 2014 at 12:31 PM, Mike Murphy mmurphy3...@gmail.com wrote:
 On Tue, Mar 11, 2014 at 6:11 AM, Grant Ingersoll gsing...@apache.org wrote:

 On Mar 8, 2014, at 11:17 AM, Uwe Schindler u...@thetaphi.de wrote:

 [.] Move Lucene/Solr 4.8 (means branch_4x) to Java 7 and backport all Java 
 7-related issues (FileChannel improvements, diamond operator,...).


 -0 -- Seems a little odd that we would force an upgrade on a minor version, 
 which is not usually seen as best practice in development.

 I agree.  I also do not see it making a difference to potential developers.
 What are the benefits to the project?  A developer will not make their
 decision to get involved in Lucene/Solr based on branch4x being Java 6
 vs Java 7.
 If it causes some users to not upgrade, that's also a bad thing for the project.

 -Mike

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5519) Make queueDepth enforcing optional in TopNSearcher

2014-03-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931678#comment-13931678
 ] 

Michael McCandless commented on LUCENE-5519:


Hmm, I don't like that we pre-allocate the full array[topN] here.  Can we just 
use a List<X> for the input/output pairs?  Or maybe just go back to 
List<MinResult>?

It's also sort of strange to have spaceNeeded/isFull methods: it makes this 
class more like a queue and less like a set of final results.  I'd prefer if it 
were more like TopDocs: its purpose is to simply deliver results.  I think 
those methods/queue state tracking should be outside of that class.
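To make the comparison concrete, here is the kind of TopDocs-like shape being suggested (class and field names are assumptions, not the patch):

{code}
import java.util.List;

/** Hypothetical result holder: entries plus a completeness flag, no queue-style bookkeeping. */
final class TopResultsSketch<T> {
  final boolean isComplete; // false when the queue depth was not enforced and results may be partial
  final List<T> results;    // top-N entries, best first

  TopResultsSketch(boolean isComplete, List<T> results) {
    this.isComplete = isComplete;
    this.results = results;
  }
}
{code}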



 Make queueDepth enforcing optional in TopNSearcher
 --

 Key: LUCENE-5519
 URL: https://issues.apache.org/jira/browse/LUCENE-5519
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.7
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5519.patch, LUCENE-5519.patch


 Currently TopNSearcher enforces the maxQueueSize based on rejectedCount + 
 topN. I have a use case where I simply don't know the exact limit and I 
 am OK with a top N that is not 100% exact. Yet, if I don't specify the right 
 upper limit for the queue size I get an assertion error when I run tests, and 
 the only workaround is to make the queue unbounded, which looks odd while it 
 would possibly work just fine. I think it's fair to add an option that just 
 doesn't enforce the limit, and if it should be enforced we throw a real 
 exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5851) Disabling lookups into disabled caches

2014-03-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931694#comment-13931694
 ] 

Yonik Seeley commented on SOLR-5851:


Simply remove the whole cache declaration if you don't want the cache.

 Disabling lookups into disabled caches
 --

 Key: SOLR-5851
 URL: https://issues.apache.org/jira/browse/SOLR-5851
 Project: Solr
  Issue Type: Improvement
Reporter: Otis Gospodnetic
Priority: Minor

 When a cache is disabled, ideally lookups into that cache should be 
 completely disabled, too.
 See: 
 http://search-lucene.com/m/QTPaTfMT52subj=Disabling+lookups+into+disabled+caches



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5506) Ignoring the Return Values Of Immutable Objects

2014-03-12 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931701#comment-13931701
 ] 

Furkan KAMACI commented on LUCENE-5506:
---

[~mikemccand] I've debugged the code and I see that the optimize() method at 
Reduce.java has a bug (I think so). Right now args[0] is not upper-cased properly, 
so optimize (removing the holes in the rows of the given trie) is never taken 
into account. Because the optimize() method never runs, the bug has gone unnoticed. 
If the bug is resolved the stemmer may work faster. I will open a new JIRA for it.
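
For reference, a minimal sketch of the kind of fix implied here (assuming the intent is to actually use the upper-cased value; this is not the committed change):

{code}
// String.toUpperCase returns a new String, so the result has to be assigned back.
args[0] = args[0].toUpperCase(Locale.ROOT);
{code}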

 Ignoring the Return Values Of Immutable Objects
 ---

 Key: LUCENE-5506
 URL: https://issues.apache.org/jira/browse/LUCENE-5506
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.6.1, 4.7
Reporter: Furkan KAMACI
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5506.patch


 I was checking the source code of Lucene and I realized that return values of 
 immutable objects are ignored at CSVUtil.java and Compile.java as follows:
 *CSVUtil.java*:
 {code}
   /**
    * Quote and escape input value for CSV
    */
   public static String quoteEscape(String original) {
     String result = original;

     if (result.indexOf('\"') >= 0) {
       result.replace("\"", ESCAPED_QUOTE);
     }
     if (result.indexOf(COMMA) >= 0) {
       result = "\"" + result + "\"";
     }
     return result;
   }
 {code}
 *Compile.java*
 {code}
 if (args.length < 1) {
   return;
 }
 args[0].toUpperCase(Locale.ROOT);
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5521) Egothor Stemmer Bug for Optimizing (removing holes in the rows) for the given Trie

2014-03-12 Thread Furkan KAMACI (JIRA)
Furkan KAMACI created LUCENE-5521:
-

 Summary: Egothor Stemmer Bug for Optimizing (removing holes in the 
rows) for the given Trie
 Key: LUCENE-5521
 URL: https://issues.apache.org/jira/browse/LUCENE-5521
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.6.1, 4.7
Reporter: Furkan KAMACI
Priority: Minor
 Fix For: 4.8


The main method at Compile.java has these lines:

{code}
args[0].toUpperCase(Locale.ROOT);
{code}

I've fixed it with LUCENE-5506. However, the optimize method does not work correctly 
and TestCompile.java throws an error.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5852) Add CloudSolrServer helper method to connect to a ZK ensemble

2014-03-12 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-5852:
---

 Summary: Add CloudSolrServer helper method to connect to a ZK 
ensemble
 Key: SOLR-5852
 URL: https://issues.apache.org/jira/browse/SOLR-5852
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker


We should have a CloudSolrServer constructor which takes a list of ZK servers 
to connect to.

Something Like 
{noformat}
public CloudSolrServer(String... zkHost);
{noformat}

- Document the current constructor better to mention that to connect to a ZK 
ensemble you can pass a comma-delimited list of ZK servers like 
zk1:2181,zk2:2181,zk3:2181

- Thirdly should getLbServer() and getZKStatereader() be public?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5032) Implement tool and/or API for moving a replica to a specific node

2014-03-12 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931714#comment-13931714
 ] 

Shalin Shekhar Mangar commented on SOLR-5032:
-

Hi Furkan, look at SOLR-5128 where we are trying to have better cluster 
management APIs. I think that now that we have an addReplica and deleteReplica 
API, move can be implemented as a wrapper over them.
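
As a hedged illustration (the endpoint and parameter names below are my reading of the Collections API, not taken from SOLR-5128 or any committed patch), such a wrapper would essentially boil down to two calls:

{code}
// Illustration only: a "move replica" expressed as ADDREPLICA followed by DELETEREPLICA.
public class MoveReplicaSketch {
  public static void main(String[] args) {
    String base = "http://localhost:8983/solr/admin/collections";

    // 1) create the new replica on the target node
    String addReplica = base + "?action=ADDREPLICA"
        + "&collection=collection1&shard=shard1&node=targetHost:8983_solr";

    // 2) once the new replica is active, delete the old one
    String deleteReplica = base + "?action=DELETEREPLICA"
        + "&collection=collection1&shard=shard1&replica=core_node3";

    System.out.println(addReplica);
    System.out.println(deleteReplica);
  }
}
{code}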

 Implement tool and/or API for moving a replica to a specific node
 -

 Key: SOLR-5032
 URL: https://issues.apache.org/jira/browse/SOLR-5032
 Project: Solr
  Issue Type: New Feature
Reporter: Otis Gospodnetic
Priority: Minor

 See http://search-lucene.com/m/Sri8gFljGw



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5419) Solr Admin UI Query Result Does Nothing at Error

2014-03-12 Thread Furkan KAMACI (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Furkan KAMACI updated SOLR-5419:


Description: 
When you make a query into Solr via Solr Admin Page and if an error occurs 
there writes Loading.. and nothing happens. 

i.e. if you write an invalid Request Handler at Query page even response is 404 
Not Found Loading... is still there.

  was:
When you make a query into Solr via Solr Admin Page and if error occurs there 
writes Loading.. and does nothing. 

i.e. if you write an invalid Request Handler at Query page even response is 404 
Not Found Loading... is still there.


 Solr Admin UI Query Result Does Nothing at Error
 

 Key: SOLR-5419
 URL: https://issues.apache.org/jira/browse/SOLR-5419
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.5.1, 4.6, 4.6.1, 4.7
Reporter: Furkan KAMACI
Priority: Minor
 Fix For: 4.8

 Attachments: SOLR-5419.patch


 When you make a query into Solr via Solr Admin Page and if an error occurs 
 there writes Loading.. and nothing happens. 
 i.e. if you write an invalid Request Handler at Query page even response is 
 404 Not Found Loading... is still there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5419) Solr Admin UI Query Result Does Nothing at Error

2014-03-12 Thread Furkan KAMACI (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Furkan KAMACI updated SOLR-5419:


Description: 
When you make a query into Solr via Solr Admin Page and if an error occurs 
there writes Loading.. and nothing happens. 

i.e. if you write an invalid Request Handler (something like /select 
instead of /select) at Query page and even response is 404 Not Found you will 
see that Loading... is still there and you will not able to understand 
whether an error occurred or the response is so slow at first glance.

  was:
When you make a query into Solr via Solr Admin Page and if an error occurs 
there writes Loading.. and nothing happens. 

i.e. if you write an invalid Request Handler at Query page even response is 404 
Not Found Loading... is still there.


 Solr Admin UI Query Result Does Nothing at Error
 

 Key: SOLR-5419
 URL: https://issues.apache.org/jira/browse/SOLR-5419
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.5.1, 4.6, 4.6.1, 4.7
Reporter: Furkan KAMACI
Priority: Minor
 Fix For: 4.8

 Attachments: SOLR-5419.patch


 When you make a query into Solr via Solr Admin Page and if an error occurs 
 there writes Loading.. and nothing happens. 
 i.e. if you write an invalid Request Handler (something like /select 
 instead of /select) at Query page and even response is 404 Not Found you will 
 see that Loading... is still there and you will not able to understand 
 whether an error occurred or the response is so slow at first glance.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr Admin UI Query Result Does Nothing at Error

2014-03-12 Thread Furkan KAMACI
Hi;


When you make a query into Solr via the Solr Admin Page and an error occurs,
it just shows "Loading.." and nothing happens.

i.e. if you write an invalid Request Handler (something like a misspelled
/select) at the Query page, even if the response is 404 Not Found you will
see that "Loading..." is still there, and at first glance you will not be able
to tell whether an error occurred or the response is just slow.

I've resolved that issue with SOLR-5419 and you can check it.

Thanks;
Furkan KAMACI


[GitHub] lucene-solr pull request: Removal of Scorer.weight

2014-03-12 Thread shebiki
Github user shebiki commented on the pull request:

https://github.com/apache/lucene-solr/pull/40#issuecomment-37405412
  
Robert: Great, is there anything I can do to help better prep the commit 
(db57c80) for that?

mkhludenv (your first name is not on your Github profile page): 
Interesting. There is only one Scorer and 3 tests in Lucene/Solr that took 
advantage of Scorer.getWeight() and bunches of code that had to pass through a 
weight or null to Scorer's constructor. I assume the custom queries that you 
mention can not just use the same approach I used when tweaking 
ToParentBlockJoinQuery (00740d6)? This works because the Scorer is trying to 
get to the Query from the Weight that created it. Do your custom queries try to 
access Query objects from Scorers that they didn't create? If so would you mind 
sharing a little more information about how you wire that up? I'd love to 
understand your use case better.



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5514:
--

Attachment: LUCENE-5514.patch

More fixes after synchronizing the build files (they had gone a little bit out of 
sync). Previously missing stuff like the IDEA and Netbeans configurations is now 
also correct.

I will commit this soon; the 72-hour vote is over.

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
  Labels: Java7
 Fix For: 4.8

 Attachments: LUCENE-5514.patch, LUCENE-5514.patch, LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}.
 I will attach a patch soon (for review).
 Here is the vote thread: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C02be01cf3ae9%24e3735090%24aa59f1b0%24%40thetaphi.de%3E]
 Preliminary result: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C001001cf3c45%248d2adc00%24a7809400%24%40thetaphi.de%3E]



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [VOTE] Move to Java 7 in Lucene/Solr 4.8, use Java 8 in trunk (once officially released)

2014-03-12 Thread Uwe Schindler
Hi,

the vote is over after 72 hours. The results:

- Almost all voting committers want to move to Java 7 - only Grant Ingersoll 
said -0. So I declare this a successful vote.
I will now proceed with committing the backports (LUCENE-5514). The Jenkins 
infrastructure is already upgraded. I will also add a note to the Lucene/Solr 
webpage to announce that Lucene/Solr 4.8 will require Java 7 as a minimum, and I 
will send mail to the *-user mailing lists.

- Most of the committers were against moving to Java 8 in trunk, but we 
decided to call the vote again in a few months.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Saturday, March 08, 2014 5:17 PM
 To: dev@lucene.apache.org
 Subject: [VOTE] Move to Java 7 in Lucene/Solr 4.8, use Java 8 in trunk (once
 officially released)
 
 Hi all,
 
 Java 8 will get released (hopefully, but I trust the release plan!) on March 
 18,
 2014. Because of this, lots of developers will move to Java 8, too. This makes
 maintaining 3 versions for developing Lucene 4.x not easy anymore (unless
 you have cool JAVA_HOME cmd launcher scripts using StExBar available for
 your Windows Explorer - or similar stuff in Linux/Mäc).
 
 We already discussed in another thread about moving to release trunk as 5.0,
 but people disagreed and preferred to release 4.8 with a minimum of Java 7.
 This is perfectly fine, as nobody should run Lucene or Solr on an unsupported
 platform anymore. If they upgrade to 4.8, they should also upgrade their
 infrastructure - this is a no-brainer. In Lucene trunk we switch to Java 8 as
 soon as it is released (in 10 days).
 
 Now the good things: We don't need to support JRockit anymore, no need to
 support IBM J9 in trunk (unless they release a new version based on Java 8).
 
 So the vote here is about:
 
 [.] Move Lucene/Solr 4.8 (means branch_4x) to Java 7 and backport all Java 7-
 related issues (FileChannel improvements, diamond operator,...).
 [.] Move Lucene/Solr trunk to Java 8 and allow closures in source code. This
 would make some APIs much nicer. Our infrastructure mostly supports this,
 only ECJ Javadoc linting is not yet possible, but forbidden-apis supports 
 Java 8
 with all its crazy new stuff.
 
 You can vote separately for both items!
 
 Uwe
 
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931740#comment-13931740
 ] 

ASF subversion and git services commented on LUCENE-5514:
-

Commit 1576728 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576728 ]

LUCENE-5514: Move to Java 7 on branch_4x. There will come more commits to move 
changes entries and documentation.

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
  Labels: Java7
 Fix For: 4.8

 Attachments: LUCENE-5514.patch, LUCENE-5514.patch, LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}.
 I will attach a patch soon (for review).
 Here is the vote thread: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C02be01cf3ae9%24e3735090%24aa59f1b0%24%40thetaphi.de%3E]
 Preliminary result: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C001001cf3c45%248d2adc00%24a7809400%24%40thetaphi.de%3E]



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931743#comment-13931743
 ] 

ASF subversion and git services commented on LUCENE-5514:
-

Commit 1576731 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576731 ]

Merged revision(s) 1576729 from lucene/dev/trunk:
LUCENE-5514: Remove outdated constants

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
  Labels: Java7
 Fix For: 4.8

 Attachments: LUCENE-5514.patch, LUCENE-5514.patch, LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}.
 I will attach a patch soon (for review).
 Here is the vote thread: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C02be01cf3ae9%24e3735090%24aa59f1b0%24%40thetaphi.de%3E]
 Preliminary result: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C001001cf3c45%248d2adc00%24a7809400%24%40thetaphi.de%3E]



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931742#comment-13931742
 ] 

ASF subversion and git services commented on LUCENE-5514:
-

Commit 1576729 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1576729 ]

LUCENE-5514: Remove outdated constants

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
  Labels: Java7
 Fix For: 4.8

 Attachments: LUCENE-5514.patch, LUCENE-5514.patch, LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}.
 I will attach a patch soon (for review).
 Here is the vote thread: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C02be01cf3ae9%24e3735090%24aa59f1b0%24%40thetaphi.de%3E]
 Preliminary result: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C001001cf3c45%248d2adc00%24a7809400%24%40thetaphi.de%3E]



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: Removal of Scorer.weight

2014-03-12 Thread rmuir
Github user rmuir commented on the pull request:

https://github.com/apache/lucene-solr/pull/40#issuecomment-37408333
  
I can take care of it, I just want to do a proper review first and I ran out of 
time yesterday.

As far as Scorer.getWeight, the tests may not expose this so much, but the 
idea is that you can connect Scorers to e.g. the Query objects that own them. 
This can be useful in custom Collectors, for example.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931755#comment-13931755
 ] 

ASF subversion and git services commented on LUCENE-5514:
-

Commit 1576736 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1576736 ]

LUCENE-5514: Update changes.txt

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
  Labels: Java7
 Fix For: 4.8

 Attachments: LUCENE-5514.patch, LUCENE-5514.patch, LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}.
 I will attach a patch soon (for review).
 Here is the vote thread: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C02be01cf3ae9%24e3735090%24aa59f1b0%24%40thetaphi.de%3E]
 Preliminary result: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C001001cf3c45%248d2adc00%24a7809400%24%40thetaphi.de%3E]



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931759#comment-13931759
 ] 

ASF subversion and git services commented on LUCENE-5514:
-

Commit 1576737 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576737 ]

Merged revision(s) 1576736 from lucene/dev/trunk:
LUCENE-5514: Update changes.txt

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
  Labels: Java7
 Fix For: 4.8

 Attachments: LUCENE-5514.patch, LUCENE-5514.patch, LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}.
 I will attach a patch soon (for review).
 Here is the vote thread: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C02be01cf3ae9%24e3735090%24aa59f1b0%24%40thetaphi.de%3E]
 Preliminary result: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C001001cf3c45%248d2adc00%24a7809400%24%40thetaphi.de%3E]



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: Removal of Scorer.weight

2014-03-12 Thread mkhludnev
Github user mkhludnev commented on the pull request:

https://github.com/apache/lucene-solr/pull/40#issuecomment-37409000
  
Terry,
Yep, passing Weight everywhere might be overwhelming. My case for 
scorer.weight.query usage is my own drill-sideways facet collector. I run a standard 
BooleanQuery like +Brand:DG Color:Red Size:XL with minShouldMatch=1. When 
collector.collect(int doc) is called it checks the child scorers' positions to 
understand whether it was a Red or an XL hit.
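
A rough sketch of that kind of collector on the Lucene 4.x API (the class and variable names are made up for illustration; this is not the code under discussion):

{code}
import java.io.IOException;
import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// Illustration: walk the BooleanQuery's child scorers to see which optional
// clause (e.g. Color:Red vs Size:XL) is positioned on the current document.
public class SidewaysClauseCollector extends Collector {
  private Scorer scorer;

  @Override
  public void setScorer(Scorer scorer) throws IOException {
    this.scorer = scorer;
  }

  @Override
  public void collect(int doc) throws IOException {
    for (Scorer.ChildScorer child : scorer.getChildren()) {
      if (child.child.docID() == doc) {
        // child.child.getWeight().getQuery() is what identifies the clause today;
        // with getWeight() removed, this link would need to be wired differently.
      }
    }
  }

  @Override
  public void setNextReader(AtomicReaderContext context) throws IOException {}

  @Override
  public boolean acceptsDocsOutOfOrder() {
    return false;
  }
}
{code}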


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5518) minor hunspell optimizations

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931762#comment-13931762
 ] 

ASF subversion and git services commented on LUCENE-5518:
-

Commit 1576738 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1576738 ]

LUCENE-5518: minor hunspell optimizations

 minor hunspell optimizations
 

 Key: LUCENE-5518
 URL: https://issues.apache.org/jira/browse/LUCENE-5518
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Reporter: Robert Muir
 Attachments: LUCENE-5518.patch, LUCENE-5518.patch


 After benchmarking indexing speed on SOLR-3245, I ran a profiler and a couple 
 things stood out.
 There are other things I want to improve too, but these almost double the 
 speed for many dictionaries.
 * Hunspell supports two-stage affix stripping, but the vast majority of 
 dictionaries don't have any affixes that support it. So we just add a boolean 
 (Dictionary.twoStageAffix) that is false until we see one.
 * We use java.util.regex.Pattern for condition checks. This is slow; I 
 switched to o.a.l.automaton and it's much faster, and uses slightly less RAM 
 too (see the sketch below).
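
 A hedged sketch of that switch (the condition string and test word are made up; this is not the actual patch):

 {code}
 import java.util.regex.Pattern;
 import org.apache.lucene.util.automaton.CharacterRunAutomaton;
 import org.apache.lucene.util.automaton.RegExp;

 public class ConditionCheckSketch {
   public static void main(String[] args) {
     String condition = "[^aeiou]y"; // hypothetical affix condition
     String candidate = "py";        // hypothetical characters to test

     // Old approach: java.util.regex, compiled and matched for the check
     boolean viaPattern = Pattern.compile(condition).matcher(candidate).matches();

     // New approach: compile once into a run automaton and reuse it for every check
     CharacterRunAutomaton automaton =
         new CharacterRunAutomaton(new RegExp(condition).toAutomaton());
     boolean viaAutomaton = automaton.run(candidate);

     System.out.println(viaPattern + " " + viaAutomaton); // both true here
   }
 }
 {code}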



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5518) minor hunspell optimizations

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931767#comment-13931767
 ] 

ASF subversion and git services commented on LUCENE-5518:
-

Commit 1576739 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576739 ]

LUCENE-5518: minor hunspell optimizations

 minor hunspell optimizations
 

 Key: LUCENE-5518
 URL: https://issues.apache.org/jira/browse/LUCENE-5518
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Reporter: Robert Muir
 Attachments: LUCENE-5518.patch, LUCENE-5518.patch


 After benchmarking indexing speed on SOLR-3245, I ran a profiler and a couple 
 things stood out.
 There are other things I want to improve too, but these almost double the 
 speed for many dictionaries.
 * Hunspell supports two-stage affix stripping, but the vast majority of 
 dictionaries don't have any affixes that support it. So we just add a boolean 
 (Dictionary.twoStageAffix) that is false until we see one.
 * We use java.util.regex.Pattern for condition checks. This is slow; I 
 switched to o.a.l.automaton and it's much faster, and uses slightly less RAM 
 too.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5419) Solr Admin UI Query Result Does Nothing at Error

2014-03-12 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) updated SOLR-5419:


Attachment: Screen Shot 2014-03-12 at 2.53.50 PM.png

Furkan, if I'm not mistaken, the patch doesn't change anything? Since it is 
actually triggered on the {{complete}} event, it covers the {{success}} as 
well as the {{error}} case. All the defined {{content_generator}}s depend on 
{{xhr.responseText}}, and highlighting is only applied if the response is 
successful - so it basically does that already.

Running trunk in fact does what you say it doesn't: see the attached 
screenshot, taken with r1576737 - i.e. without your patch.

I'd guess you see something in your browser's console in case the Loading 
.. isn't removed?

 Solr Admin UI Query Result Does Nothing at Error
 

 Key: SOLR-5419
 URL: https://issues.apache.org/jira/browse/SOLR-5419
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.5.1, 4.6, 4.6.1, 4.7
Reporter: Furkan KAMACI
Priority: Minor
 Fix For: 4.8

 Attachments: SOLR-5419.patch, Screen Shot 2014-03-12 at 2.53.50 PM.png


 When you make a query into Solr via Solr Admin Page and if an error occurs 
 there writes Loading.. and nothing happens. 
 i.e. if you write an invalid Request Handler (something like /select 
 instead of /select) at Query page and even response is 404 Not Found you will 
 see that Loading... is still there and you will not able to understand 
 whether an error occurred or the response is so slow at first glance.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5852) Add CloudSolrServer helper method to connect to a ZK ensemble

2014-03-12 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-5852:


Attachment: SOLR-5852.patch

Simple patch.

- Adds Javadocs to the current constructor detailing how to connect to a ZK 
ensemble.
- Adds another constructor which takes a list of servers and converts them 
into a comma-separated list of servers (a rough sketch of the idea is shown below).
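
A rough sketch of the idea (illustrative only, not the attached patch):

{code}
// Illustration: build the comma-delimited zkHost string that the existing
// CloudSolrServer(String zkHost) constructor expects from a list of servers.
public final class ZkHostJoiner {
  public static String join(String... zkHosts) {
    StringBuilder sb = new StringBuilder();
    for (String host : zkHosts) {
      if (sb.length() > 0) sb.append(',');
      sb.append(host); // e.g. "zk1:2181"
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // prints "zk1:2181,zk2:2181,zk3:2181"
    System.out.println(join("zk1:2181", "zk2:2181", "zk3:2181"));
  }
}
{code}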

 Add CloudSolrServer helper method to connect to a ZK ensemble
 -

 Key: SOLR-5852
 URL: https://issues.apache.org/jira/browse/SOLR-5852
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
 Attachments: SOLR-5852.patch


 We should have a CloudSolrServer constructor which takes a list of ZK servers 
 to connect to.
 Something Like 
 {noformat}
 public CloudSolrServer(String... zkHost);
 {noformat}
 - Document the current constructor better to mention that to connect to a ZK 
 ensemble you can pass a comma-delimited list of ZK servers like 
 zk1:2181,zk2:2181,zk3:2181
 - Thirdly should getLbServer() and getZKStatereader() be public?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4072) CharFilter that Unicode-normalizes input

2014-03-12 Thread David Goldfarb (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Goldfarb updated LUCENE-4072:
---

Attachment: 4072.patch

Attaching a new patch. All tests pass. 

I'm using Normalizer2.isInert to check whether we need to keep reading into the input 
buffer, since it doesn't return false positives, even though it's not as fast as 
.hasBoundaryBefore().
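
For context, a tiny sketch of the two ICU4J calls being compared (illustrative only):

{code}
import com.ibm.icu.text.Normalizer2;

public class InertCheckSketch {
  public static void main(String[] args) {
    Normalizer2 n2 = Normalizer2.getNFKCInstance();
    int cp = 'A';
    // isInert gives a safe "no more input needed for this code point" signal,
    // at the cost of being a bit slower than hasBoundaryBefore.
    System.out.println(n2.isInert(cp));
    System.out.println(n2.hasBoundaryBefore(cp));
  }
}
{code}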

 CharFilter that Unicode-normalizes input
 

 Key: LUCENE-4072
 URL: https://issues.apache.org/jira/browse/LUCENE-4072
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Ippei UKAI
 Attachments: 4072.patch, 4072.patch, DebugCode.txt, 
 LUCENE-4072.patch, LUCENE-4072.patch, LUCENE-4072.patch, LUCENE-4072.patch, 
 LUCENE-4072.patch, LUCENE-4072.patch, 
 ippeiukai-ICUNormalizer2CharFilter-4752cad.zip


 I'd like to contribute a CharFilter that Unicode-normalizes input with ICU4J.
 The benefit of having this process as a CharFilter is that the tokenizer can work 
 on normalised text while offset correction ensures that the fast vector highlighter 
 and other offset-dependent features do not break.
 The implementation is available at the following repository:
 https://github.com/ippeiukai/ICUNormalizer2CharFilter
 Unfortunately this is my unpaid side-project and I cannot spend much time 
 merging my work into Lucene to make an appropriate patch. I'd appreciate it if 
 anyone could give it a go. I'm happy to relicense it to whatever meets 
 your needs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2298) Polish Analyzer

2014-03-12 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931793#comment-13931793
 ] 

Furkan KAMACI commented on LUCENE-2298:
---

I've detected a bug related to this issue. You can check it from here: 
LUCENE-5521

 Polish Analyzer
 ---

 Key: LUCENE-2298
 URL: https://issues.apache.org/jira/browse/LUCENE-2298
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Affects Versions: 3.1
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: 3.1, 4.0-ALPHA

 Attachments: LUCENE-2298.patch, LUCENE-2298.patch, LUCENE-2298.patch, 
 stemmer_2.7z


 Andrzej Bialecki has written a Polish stemmer and provided stemming tables 
 for it under Apache License.
 You can read more about it here: http://www.getopt.org/stempel/
 In reality, the stemmer is general code and we could use it for more 
 languages too perhaps.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-12 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931797#comment-13931797
 ] 

Robert Muir commented on LUCENE-5512:
-

Thanks Furkan, I merged the patch into trunk; I found a few missing ones (e.g. 
lucene/expressions, the Solr map-reduce contribs) but I fixed those up.

I'll commit soon after I'm finished reviewing all the changes.

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch, LUCENE-5512.patch, LUCENE-5512.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5851) Disabling lookups into disabled caches

2014-03-12 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931798#comment-13931798
 ] 

Otis Gospodnetic commented on SOLR-5851:


bq. Simply remove the whole cache declaration if you don't want the cache.

As in, remove it from the solrconfig.xml?  I think that was done in the case I 
showed, but I could be wrong.  I'll double-check and report.

 Disabling lookups into disabled caches
 --

 Key: SOLR-5851
 URL: https://issues.apache.org/jira/browse/SOLR-5851
 Project: Solr
  Issue Type: Improvement
Reporter: Otis Gospodnetic
Priority: Minor

 When a cache is disabled, ideally lookups into that cache should be 
 completely disabled, too.
 See: 
 http://search-lucene.com/m/QTPaTfMT52subj=Disabling+lookups+into+disabled+caches



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-12 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931815#comment-13931815
 ] 

Furkan KAMACI commented on LUCENE-5512:
---

You're welcome. I know that reviewing takes a little time :) I am also planning to 
submit a patch for LUCENE-3538 whenever I have time.

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch, LUCENE-5512.patch, LUCENE-5512.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931813#comment-13931813
 ] 

Uwe Schindler commented on LUCENE-5512:
---

And now you can also backport to 4.x :-)

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch, LUCENE-5512.patch, LUCENE-5512.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5514.
---

Resolution: Fixed

Committed after the vote passed!
[http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C008401cf3df4%24b6345030%24229cf090%24%40thetaphi.de%3E]

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
  Labels: Java7
 Fix For: 4.8

 Attachments: LUCENE-5514.patch, LUCENE-5514.patch, LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}.
 I will attach a patch soon (for review).
 Here is the vote thread: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C02be01cf3ae9%24e3735090%24aa59f1b0%24%40thetaphi.de%3E]
 Preliminary result: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C001001cf3c45%248d2adc00%24a7809400%24%40thetaphi.de%3E]



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931822#comment-13931822
 ] 

ASF subversion and git services commented on LUCENE-5512:
-

Commit 1576755 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1576755 ]

LUCENE-5512: remove redundant typing (diamond operator) in trunk

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch, LUCENE-5512.patch, LUCENE-5512.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



GSoC

2014-03-12 Thread Ivan Biggs
Hello,
My name is Ivan Biggs and I'm very interested in working with Lucene for my
Google Summer of Code Project. I've read a lot of the relevant documentation
and currently have my eye on the issue found here:
https://issues.apache.org/jira/browse/LUCENE-466?filter=12326260jql=labels%20%3D%20gsoc2014%20AND%20status%20%3D%20Open

My only concern is that I want to be sure that this issue would be
considered adequate work for a project in of itself or if I should plan on
tackling perhaps two of these type of issues. Furthermore, if anyone could
point me in the direction of a possible future mentor, it'd be much
appreciated as I'm not quite sure why this particular issue has an assignee
listed. Also, since Apache doesn't have any sort of template or similar
guidelines for proposal submissions available, any general help or advice
as to what sort of standards I should be adhering to would be great too!

Thanks,
Ivan


[jira] [Issue Comment Deleted] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-12 Thread Furkan KAMACI (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Furkan KAMACI updated LUCENE-5512:
--

Comment: was deleted

(was: [~thetaphi] I can backport it to 4.x I will make a patch for it too.)

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch, LUCENE-5512.patch, LUCENE-5512.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-12 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931830#comment-13931830
 ] 

Furkan KAMACI commented on LUCENE-5512:
---

[~thetaphi] I can backport it to 4.x I will make a patch for it too.

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch, LUCENE-5512.patch, LUCENE-5512.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5853) Return status for AbstractFullDistribZkTestBase#createCollection() and friends

2014-03-12 Thread JIRA
Jan Høydahl created SOLR-5853:
-

 Summary: Return status for 
AbstractFullDistribZkTestBase#createCollection() and friends
 Key: SOLR-5853
 URL: https://issues.apache.org/jira/browse/SOLR-5853
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Jan Høydahl
 Fix For: 4.8, 5.0


Spinoff from SOLR-4470

We should use the excellent progress from SOLR-4577 and have the createCollection 
methods in the test framework return a status (currently void). This way we can 
get rid of some unnecessary and unreliable sleeps.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931674#comment-13931674
 ] 

Jan Høydahl edited comment on SOLR-4470 at 3/12/14 3:02 PM:


I think at least we should fix the sleep(1) as a separate JIRA *before* 
committing this patch.
Created SOLR-5853 for this.


was (Author: janhoy):
I think at least we should fix the sleep(1) as a separate JIRA *before* 
committing this patch.

 Support for basic http auth in internal solr requests
 -

 Key: SOLR-4470
 URL: https://issues.apache.org/jira/browse/SOLR-4470
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, multicore, replication (java), SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Jan Høydahl
  Labels: authentication, https, solrclient, solrcloud, ssl
 Fix For: 5.0

 Attachments: SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470_branch_4x_r1452629.patch, 
 SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
 SOLR-4470_trunk_r1568857.patch


 We want to protect any HTTP-resource (url). We want to require credentials no 
 matter what kind of HTTP-request you make to a Solr-node.
 It can faily easy be acheived as described on 
 http://wiki.apache.org/solr/SolrSecurity. This problem is that Solr-nodes 
 also make internal request to other Solr-nodes, and for it to work 
 credentials need to be provided here also.
 Ideally we would like to forward credentials from a particular request to 
 all the internal sub-requests it triggers. E.g. for search and update 
 request.
 But there are also internal requests
 * that only indirectly/asynchronously triggered from outside requests (e.g. 
 shard creation/deletion/etc based on calls to the Collection API)
 * that do not in any way have relation to an outside super-request (e.g. 
 replica synching stuff)
 We would like to aim at a solution where original credentials are 
 forwarded when a request directly/synchronously trigger a subrequest, and 
 fallback to a configured internal credentials for the 
 asynchronous/non-rooted requests.
 In our solution we would aim at only supporting basic http auth, but we would 
 like to make a framework around it, so that not to much refactoring is 
 needed if you later want to make support for other kinds of auth (e.g. digest)
 We will work at a solution but create this JIRA issue early in order to get 
 input/comments from the community as early as possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-03-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5473:
-

Attachment: SOLR-5473-74.patch

cloudsolrservertest using external collections

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, ec2-23-20-119-52_solr.log, 
 ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-5351) More Like This Handler uses only first field in mlt.fl when using stream.body

2014-03-12 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili reassigned SOLR-5351:
-

Assignee: Tommaso Teofili

 More Like This Handler uses only first field in mlt.fl when using stream.body
 -

 Key: SOLR-5351
 URL: https://issues.apache.org/jira/browse/SOLR-5351
 Project: Solr
  Issue Type: Bug
  Components: MoreLikeThis
Affects Versions: 4.4
 Environment: Linux,Windows
Reporter: Zygmunt Wiercioch
Assignee: Tommaso Teofili
Priority: Minor

 The documentation at: http://wiki.apache.org/solr/MoreLikeThisHandler 
 indicates that one can use multiple fields for similarity in mlt.fl:
 http://localhost:8983/solr/mlt?stream.body=electronics%20memorymlt.fl=manu,catmlt.interestingTerms=listmlt.mintf=0
 In trying this, only one field is used. 
  Looking at the code, it only looks at the first field:
  public DocListAndSet getMoreLikeThis( Reader reader, int start, int rows, 
      List<Query> filters, List<InterestingTerm> terms, int flags ) throws 
      IOException
  {
    // analyzing with the first field: previous (stupid) behavior
    rawMLTQuery = mlt.like(reader, mlt.getFieldNames()[0]); 
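
 A hedged sketch of the kind of multi-field change one might expect (it assumes the
 document body is also available as a String so each field can get its own Reader;
 'mlt', 'body' and 'rawMLTQuery' are taken from the surrounding method, and this is
 not an actual patch):

 {code}
 // Illustration only (fragment): OR together a like() query per configured
 // mlt.fl field instead of using only getFieldNames()[0].
 // Uses org.apache.lucene.search.BooleanQuery / BooleanClause and java.io.StringReader.
 BooleanQuery combined = new BooleanQuery();
 for (String field : mlt.getFieldNames()) {
   combined.add(mlt.like(new StringReader(body), field), BooleanClause.Occur.SHOULD);
 }
 rawMLTQuery = combined;
 {code}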



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931854#comment-13931854
 ] 

Uwe Schindler commented on LUCENE-5512:
---

[~kamaci]: backports should be done with svn merge and then committed. 
Unfortunately thats not easy to do for a non-committer. Otherwise it would be a 
separate patch, which is not ideal, because the merge information is lost.

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch, LUCENE-5512.patch, LUCENE-5512.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5522) FacetConfig doesn't add drill-down terms for facet associations

2014-03-12 Thread Shai Erera (JIRA)
Shai Erera created LUCENE-5522:
--

 Summary: FacetConfig doesn't add drill-down terms for facet 
associations
 Key: LUCENE-5522
 URL: https://issues.apache.org/jira/browse/LUCENE-5522
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 4.8, 5.0


I bumped into this while updating my examples code. Will attach a patch which 
fixes this shortly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5522) FacetConfig doesn't add drill-down terms for facet associations

2014-03-12 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5522:
---

Attachment: LUCENE-5522.patch

Patch fixes FacetsConfig and adds tests to both demo and facet packages.

 FacetConfig doesn't add drill-down terms for facet associations
 ---

 Key: LUCENE-5522
 URL: https://issues.apache.org/jira/browse/LUCENE-5522
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5522.patch


 I bumped into this while updating my examples code. Will attach a patch which 
 fixes this shortly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5522) FacetConfig doesn't add drill-down terms for facet associations

2014-03-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931879#comment-13931879
 ] 

Michael McCandless commented on LUCENE-5522:


+1, thanks Shai!

 FacetConfig doesn't add drill-down terms for facet associations
 ---

 Key: LUCENE-5522
 URL: https://issues.apache.org/jira/browse/LUCENE-5522
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5522.patch


 I bumped into this while updating my examples code. Will attach a patch which 
 fixes this shortly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5519) Make queueDepth enforcing optional in TopNSearcher

2014-03-12 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-5519:


Attachment: LUCENE-5519.patch

Here is a new patch moving closer to TopDocs again. I don't use an array in the 
TopResults since it's a pretty useless conversion, and I implemented Iterable 
such that we can use it directly in a foreach loop. I think it's ready.

 Make queueDepth enforcing optional in TopNSearcher
 --

 Key: LUCENE-5519
 URL: https://issues.apache.org/jira/browse/LUCENE-5519
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.7
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5519.patch, LUCENE-5519.patch, LUCENE-5519.patch


 currently TopNSearcher enforces the maxQueueSize based on rejectedCount + 
 topN. I have a usecase where I simply don't know the exact limit and I 
 am ok with a top N that is not 100% exact. Yet, if I don't specify the right 
 upper limit for the queue size I get an assertion error when I run tests, and 
 the only workaround is to make the queue unbounded, which looks odd while it 
 would possibly work just fine. I think it's fair to add an option that just 
 doesn't enforce the limit, and if it should be enforced we throw a real 
 exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5749) Implement an Overseer status API

2014-03-12 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5749:


Attachment: SOLR-5749.patch

This adds /admin/collections?action=OVERSEERSTATUS API. Stats added are:
# success and error counts
# queue sizes for overseer, overseer work queue and overseer collection queue
# various timing statistics per operation type

I'm still working on the tests.

 Implement an Overseer status API
 

 Key: SOLR-5749
 URL: https://issues.apache.org/jira/browse/SOLR-5749
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: 5.0

 Attachments: SOLR-5749.patch


 Right now there is little to no information exposed about the overseer from 
 SolrCloud.
 I propose that we have an API for overseer status which can return:
 # Past N commands executed (grouped by command type)
 # Status (queue-size, current overseer leader node)
 # Overseer log



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5522) FacetConfig doesn't add drill-down terms for facet associations

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931887#comment-13931887
 ] 

ASF subversion and git services commented on LUCENE-5522:
-

Commit 1576790 from [~shaie] in branch 'dev/trunk'
[ https://svn.apache.org/r1576790 ]

LUCENE-5522: FacetConfig doesn't add drill-down terms for facet associations

 FacetConfig doesn't add drill-down terms for facet associations
 ---

 Key: LUCENE-5522
 URL: https://issues.apache.org/jira/browse/LUCENE-5522
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5522.patch


 I bumped into this while updating my examples code. Will attach a patch which 
 fixes this shortly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5749) Implement an Overseer status API

2014-03-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931891#comment-13931891
 ] 

Mark Miller commented on SOLR-5749:
---

Nice Shalin!

 Implement an Overseer status API
 

 Key: SOLR-5749
 URL: https://issues.apache.org/jira/browse/SOLR-5749
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: 5.0

 Attachments: SOLR-5749.patch


 Right now there is little to no information exposed about the overseer from 
 SolrCloud.
 I propose that we have an API for overseer status which can return:
 # Past N commands executed (grouped by command type)
 # Status (queue-size, current overseer leader node)
 # Overseer log



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5749) Implement an Overseer status API

2014-03-12 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931901#comment-13931901
 ] 

Shalin Shekhar Mangar commented on SOLR-5749:
-

Here's how it looks right now:
http://localhost:8983/solr/admin/collections?action=overseerstatus
{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">26</int>
  </lst>
  <str name="leader">192.168.1.3:8983_solr</str>
  <int name="overseer_queue_size">0</int>
  <int name="overseer_work_queue_size">0</int>
  <int name="overseer_collection_queue_size">2</int>
  <lst name="stats">
    <lst name="leader">
      <int name="requests">4</int>
      <int name="errors">0</int>
      <double name="totalTime">0.599</double>
      <double name="avgRequestsPerSecond">0.07359325662045857</double>
      <double name="5minRateReqsPerSecond">0.3504682187309409</double>
      <double name="15minRateReqsPerSecond">0.38265912794758644</double>
      <double name="avgTimePerRequest">0.14975</double>
      <double name="medianRequestTime">0.1395</double>
      <double name="75thPcRequestTime">0.179</double>
      <double name="95thPcRequestTime">0.19</double>
      <double name="99thPcRequestTime">0.19</double>
      <double name="999thPcRequestTime">0.19</double>
    </lst>
    <lst name="state">
      <int name="requests">4</int>
      <int name="errors">0</int>
      <double name="totalTime">8.589</double>
      <double name="avgRequestsPerSecond">0.06929964428146092</double>
      <double name="5minRateReqsPerSecond">0.3504682187309409</double>
      <double name="15minRateReqsPerSecond">0.38265912794758644</double>
      <double name="avgTimePerRequest">2.14725</double>
      <double name="medianRequestTime">0.8644</double>
      <double name="75thPcRequestTime">5.18075</double>
      <double name="95thPcRequestTime">6.531</double>
      <double name="99thPcRequestTime">6.531</double>
      <double name="999thPcRequestTime">6.531</double>
    </lst>
  </lst>
</response>
{code}

 Implement an Overseer status API
 

 Key: SOLR-5749
 URL: https://issues.apache.org/jira/browse/SOLR-5749
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: 5.0

 Attachments: SOLR-5749.patch


 Right now there is little to no information exposed about the overseer from 
 SolrCloud.
 I propose that we have an API for overseer status which can return:
 # Past N commands executed (grouped by command type)
 # Status (queue-size, current overseer leader node)
 # Overseer log



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5522) FacetConfig doesn't add drill-down terms for facet associations

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931907#comment-13931907
 ] 

ASF subversion and git services commented on LUCENE-5522:
-

Commit 1576797 from [~shaie] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576797 ]

LUCENE-5522: FacetConfig doesn't add drill-down terms for facet associations

 FacetConfig doesn't add drill-down terms for facet associations
 ---

 Key: LUCENE-5522
 URL: https://issues.apache.org/jira/browse/LUCENE-5522
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5522.patch


 I bumped into this while updating my examples code. Will attach a patch which 
 fixes this shortly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5522) FacetConfig doesn't add drill-down terms for facet associations

2014-03-12 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved LUCENE-5522.


Resolution: Fixed

Committed to trunk and 4x.

 FacetConfig doesn't add drill-down terms for facet associations
 ---

 Key: LUCENE-5522
 URL: https://issues.apache.org/jira/browse/LUCENE-5522
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5522.patch


 I bumped into this while updating my examples code. Will attach a patch which 
 fixes this shortly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5749) Implement an Overseer status API

2014-03-12 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931919#comment-13931919
 ] 

Shalin Shekhar Mangar commented on SOLR-5749:
-

Thanks Mark. 

[~tim.potter] - I didn't use the metrics APIs (that's a big issue!) but you'll 
find that all of your demands are met by this patch.

I think we should rename stats to operations and have the timing done per 
minute instead of per second, since Overseer operations are not that frequent. 
I am working on adding the past N operations and past N failures (exceptions) 
per operation to the stats. Right now the stats are in memory, which means we 
lose them if the overseer dies. I think we should periodically, say every 15 
minutes, save the stats to ZK and initialize them from ZK when a new Overseer 
starts.
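
As a rough sketch of that last idea (the Stats/ZkWriter types, the 
/overseer/stats path and the write API below are made-up placeholders, not 
what the patch does):
{code:java}
// Sketch only: periodically snapshot in-memory Overseer stats to ZK so a new
// Overseer can re-load them on startup. All names here are hypothetical.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class OverseerStatsPersister {
  /** Hypothetical stand-ins for the real stats holder and ZK client. */
  public interface OverseerStats { byte[] serialize(); }
  public interface ZkWriter { void write(String path, byte[] data) throws Exception; }

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  public void start(final OverseerStats stats, final ZkWriter zk) {
    scheduler.scheduleAtFixedRate(new Runnable() {
      @Override public void run() {
        try {
          zk.write("/overseer/stats", stats.serialize()); // hypothetical path + API
        } catch (Exception e) {
          // Best-effort: if ZK is unreachable, the stats simply stay in memory.
        }
      }
    }, 15, 15, TimeUnit.MINUTES);
  }

  public void stop() {
    scheduler.shutdownNow();
  }
}
{code}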

 Implement an Overseer status API
 

 Key: SOLR-5749
 URL: https://issues.apache.org/jira/browse/SOLR-5749
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: 5.0

 Attachments: SOLR-5749.patch


 Right now there is little to no information exposed about the overseer from 
 SolrCloud.
 I propose that we have an API for overseer status which can return:
 # Past N commands executed (grouped by command type)
 # Status (queue-size, current overseer leader node)
 # Overseer log



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5851) Disabling lookups into disabled caches

2014-03-12 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931933#comment-13931933
 ] 

Otis Gospodnetic commented on SOLR-5851:


False alarm, it seems. I thought we had commented out the cache in this case, 
but we had just set its values to 0. Sure, it is still weird that something 
changes the size to 2, but the original problem I wanted to raise is not really 
a problem: to prevent lookups from happening at all, one just needs to comment 
out the cache definition. At least for the document cache; I didn't check the 
other caches, but I would imagine/hope Solr handles them the same way.

Should I resolve this as Won't Fix?
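
For reference, the difference between the two configurations boils down to a 
null check of roughly this shape; the class and field names below are 
hypothetical (a plain Map stands in for the cache), just to illustrate the 
idea rather than show Solr's actual code:
{code:java}
// Illustration only: when the cache is not configured at all, the field is null
// and the lookup is skipped entirely; a size=0 cache would still be consulted.
import java.util.Map;

public class DocumentFetcher {
  private final Map<Integer, Object> documentCache; // null when no cache is defined

  public DocumentFetcher(Map<Integer, Object> documentCache) {
    this.documentCache = documentCache;
  }

  public Object fetch(int docId) {
    Object doc = (documentCache == null) ? null : documentCache.get(docId);
    if (doc == null) {
      doc = loadFromIndex(docId);
      if (documentCache != null) {
        documentCache.put(docId, doc);
      }
    }
    return doc;
  }

  private Object loadFromIndex(int docId) {
    return "doc-" + docId; // placeholder for the real stored-fields read
  }
}
{code}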


 Disabling lookups into disabled caches
 --

 Key: SOLR-5851
 URL: https://issues.apache.org/jira/browse/SOLR-5851
 Project: Solr
  Issue Type: Improvement
Reporter: Otis Gospodnetic
Priority: Minor

 When a cache is disabled, ideally lookups into that cache should be 
 completely disabled, too.
 See: 
 http://search-lucene.com/m/QTPaTfMT52subj=Disabling+lookups+into+disabled+caches



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931957#comment-13931957
 ] 

ASF subversion and git services commented on LUCENE-5514:
-

Commit 1576812 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576812 ]

LUCENE-5514: Update bytecode version

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
  Labels: Java7
 Fix For: 4.8

 Attachments: LUCENE-5514.patch, LUCENE-5514.patch, LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}).
 I will attach a patch soon (for review).
 Here is the vote thread: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C02be01cf3ae9%24e3735090%24aa59f1b0%24%40thetaphi.de%3E]
 Preliminary result: 
 [http://mail-archives.apache.org/mod_mbox/lucene-dev/201403.mbox/%3C001001cf3c45%248d2adc00%24a7809400%24%40thetaphi.de%3E]



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5429) Run one search across multiple scorers/collectors

2014-03-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5429:
---

Attachment: LUCENE-5429.patch

This is a patch against 4.3.1 (small changes were required because of SimScorer 
API changes); it also includes the QueryRescorer (LUCENE-5489).

 Run one search across multiple scorers/collectors
 -

 Key: LUCENE-5429
 URL: https://issues.apache.org/jira/browse/LUCENE-5429
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: LUCENE-5429.patch, LUCENE-5429.patch, LUCENE-5429.patch


 I'm looking into the possibility of running the same search across many 
 scorers, so that decoding postings lists / doing union and intersect are done 
 once, but scoring via Similarity can be done multiple times for each hit (and 
 the results collected into separate collectors).
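
 For comparison, MultiCollector already fans one search out to several 
 collectors over a single postings traversal, but with a single scoring pass; 
 this issue goes further by re-running Similarity scoring per collector. A 
 self-contained sketch against the 4.x API (the index contents are throwaway):
{code:java}
// Sketch against the Lucene 4.x API: one search call feeds two collectors via
// MultiCollector, so postings are decoded and scored once. LUCENE-5429 wants to
// go further and re-run Similarity scoring per collector.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MultiCollector;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.search.TotalHitCountCollector;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class MultiCollectorSketch {
  public static void main(String[] args) throws Exception {
    // Throwaway single-document index.
    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir,
        new IndexWriterConfig(Version.LUCENE_47, new StandardAnalyzer(Version.LUCENE_47)));
    Document doc = new Document();
    doc.add(new TextField("body", "the quick brown fox", Field.Store.YES));
    writer.addDocument(doc);
    writer.close();

    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
    TopScoreDocCollector top = TopScoreDocCollector.create(10, true);
    TotalHitCountCollector count = new TotalHitCountCollector();

    // Single traversal of the postings; results fanned out to both collectors.
    searcher.search(new TermQuery(new Term("body", "fox")), MultiCollector.wrap(top, count));
    System.out.println("hits=" + count.getTotalHits()
        + ", topDocs=" + top.topDocs().totalHits);
  }
}
{code}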



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5519) Make queueDepth enforcing optional in TopNSearcher

2014-03-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931981#comment-13931981
 ] 

Michael McCandless commented on LUCENE-5519:


+1, looks great.  Thanks Simon!

 Make queueDepth enforcing optional in TopNSearcher
 --

 Key: LUCENE-5519
 URL: https://issues.apache.org/jira/browse/LUCENE-5519
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.7
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5519.patch, LUCENE-5519.patch, LUCENE-5519.patch


 Currently TopNSearcher enforces the maxQueueSize based on rejectedCount + 
 topN. I have a use case where I simply don't know the exact limit and I am OK 
 with a top N that is not 100% exact. Yet, if I don't specify the right upper 
 limit for the queue size I get an assertion error when I run tests, and the 
 only workaround is to make the queue unbounded, which looks odd even though it 
 would possibly work just fine. I think it's fair to add an option that just 
 doesn't enforce the limit, and to throw a real exception if it should be 
 enforced.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: GSoC

2014-03-12 Thread Michael McCandless
Hi Ivan,

It's best to just add a comment onto LUCENE-466 with your
ideas/questions specific to that issue; other more general questions
should be sent to this dev list.

Since the big part of that issue (supporting minShouldMatch in
BooleanQuery) was already done, I think fixing query parsers to handle
it is important but isn't an entire GSoC project?  Or, perhaps it is
(we have quite a few query parsers now...).  But I think doing another
improvement in addition would be the right amount...
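
For reference, the API piece that already exists is
BooleanQuery.setMinimumNumberShouldMatch; a tiny sketch (the field and terms
below are just placeholders):
{code:java}
// BooleanQuery.setMinimumNumberShouldMatch already exists; only query-parser
// syntax for it is missing. The field and terms below are placeholders.
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

public class MinShouldMatchSketch {
  public static BooleanQuery anyTwoAuthors() {
    BooleanQuery bq = new BooleanQuery();
    bq.add(new TermQuery(new Term("author", "hatcher")), BooleanClause.Occur.SHOULD);
    bq.add(new TermQuery(new Term("author", "gospodnetic")), BooleanClause.Occur.SHOULD);
    bq.add(new TermQuery(new Term("author", "harwood")), BooleanClause.Occur.SHOULD);
    bq.add(new TermQuery(new Term("author", "cutting")), BooleanClause.Occur.SHOULD);
    bq.setMinimumNumberShouldMatch(2); // require any two of the SHOULD clauses
    return bq;
  }
}
{code}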

The mentor assignment is somewhat ad-hoc, sort of like dating ;)  You
should add comments to the issue, adding ideas, asking for
suggestions, asking if anyone will mentor, and then see if any
possible mentors respond.  I'm not sure why the issue is assigned to
Yonik; I don't think he's actually working on it.

You could try looking at past GSoC proposals at Apache Lucene to get an idea?

Mike McCandless

http://blog.mikemccandless.com


On Wed, Mar 12, 2014 at 10:40 AM, Ivan Biggs
ivan.c.bi...@vanderbilt.edu wrote:
 Hello,
 My name is Ivan Biggs and I'm very interested in working with Lucene for my
 Google Summer of Code Project. I've read a lot of the relevant documentation and
 currently have my eye on the issue found here:
 https://issues.apache.org/jira/browse/LUCENE-466?filter=12326260jql=labels%20%3D%20gsoc2014%20AND%20status%20%3D%20Open

 My only concern is that I want to be sure that this issue would be
 considered adequate work for a project in and of itself, or if I should plan on
 tackling perhaps two of these types of issues. Furthermore, if anyone could
 point me in the direction of a possible future mentor, it'd be much
 appreciated as I'm not quite sure why this particular issue has an assignee
 listed. Also, since Apache doesn't have any sort of template or similar
 guidelines for proposal submissions available, any general help or advice as
 to what sort of standards I should be adhering to would be great too!

 Thanks,
 Ivan

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-466) Need QueryParser support for BooleanQuery.minNrShouldMatch

2014-03-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-466:
--

Assignee: (was: Yonik Seeley)

 Need QueryParser support for BooleanQuery.minNrShouldMatch
 --

 Key: LUCENE-466
 URL: https://issues.apache.org/jira/browse/LUCENE-466
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
 Environment: Operating System: other
 Platform: Other
Reporter: Mark Harwood
Priority: Minor
  Labels: gsoc2014

 Attached 2 new classes:
 1) CoordConstrainedBooleanQuery
 A boolean query that only matches if a specified number of the contained 
 clauses
 match. An example use might be a query that returns a list of books where ANY 
 2 people from a list of people were co-authors, e.g.:
 "Lucene In Action" would match (Erik Hatcher & Otis Gospodnetić & Mark Harwood 
 & Doug Cutting) with a minRequiredOverlap of 2, because Otis and Erik wrote 
 that.
 The book "Java Development with Ant" would not match because only 1 element in
 the list (Erik) was selected.
 2) CustomQueryParserExample
 A customised QueryParser that allows definition of
 CoordConstrainedBooleanQueries. The solution (mis)uses fieldnames to pass
 parameters to the custom query.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5519) Make queueDepth enforcing optional in TopNSearcher

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932034#comment-13932034
 ] 

ASF subversion and git services commented on LUCENE-5519:
-

Commit 1576825 from [~simonw] in branch 'dev/trunk'
[ https://svn.apache.org/r1576825 ]

LUCENE-5519: Make queueDepth enforcing optional in TopNSearcher

 Make queueDepth enforcing optional in TopNSearcher
 --

 Key: LUCENE-5519
 URL: https://issues.apache.org/jira/browse/LUCENE-5519
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.7
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5519.patch, LUCENE-5519.patch, LUCENE-5519.patch


 Currently TopNSearcher enforces the maxQueueSize based on rejectedCount + 
 topN. I have a use case where I simply don't know the exact limit and I am OK 
 with a top N that is not 100% exact. Yet, if I don't specify the right upper 
 limit for the queue size I get an assertion error when I run tests, and the 
 only workaround is to make the queue unbounded, which looks odd even though it 
 would possibly work just fine. I think it's fair to add an option that just 
 doesn't enforce the limit, and to throw a real exception if it should be 
 enforced.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: GSoC

2014-03-12 Thread Ivan Biggs
First, thanks so much for getting me pointed in the right direction! I
assume you mean straight on Jira? Also do you have any clue where one would
be able to find past proposals for Lucene?
Thanks,
Ivan


On Wed, Mar 12, 2014 at 12:08 PM, Michael McCandless 
luc...@mikemccandless.com wrote:

 Hi Ivan,

 It's best to just add a comment onto LUCENE-466 with your
 ideas/questions specific to that issue; other more general questions
 should be sent to this dev list.

 Since the big part of that issue (supporting minShouldMatch in
 BooleanQuery) was already done, I think fixing query parsers to handle
 it is important but isn't an entire GSoC project?  Or, perhaps it is
 (we have quite a few query parsers now...).  But I think doing another
 improvement in addition would be the right amount...

 The mentor assignment is somewhat ad-hoc, sort of like dating ;)  You
 should add comments to the issue, adding ideas, asking for
 suggestions, asking if anyone will mentor, and then see if any
 possible mentors respond.  I'm not sure why the issue is assigned to
 Yonik; I don't think he's actually working on it.

 You could try looking at past GSoC proposals at Apache Lucene to get an
 idea?

 Mike McCandless

 http://blog.mikemccandless.com


 On Wed, Mar 12, 2014 at 10:40 AM, Ivan Biggs
 ivan.c.bi...@vanderbilt.edu wrote:
  Hello,
  My name is Ivan Biggs and I'm very interested in working with Lucene for
 my
  Google Summer of Code Project. I've read a lot of the relevant documentation
 and
  currently have my eye on the issue found here:
 
 https://issues.apache.org/jira/browse/LUCENE-466?filter=12326260jql=labels%20%3D%20gsoc2014%20AND%20status%20%3D%20Open
 
  My only concern is that I want to be sure that this issue would be
  considered adequate work for a project in and of itself or if I should plan
 on
  tackling perhaps two of these types of issues. Furthermore, if anyone
 could
  point me in the direction of a possible future mentor, it'd be much
  appreciated as I'm not quite sure why this particular issue has an
 assignee
  listed. Also, since Apache doesn't have any sort of template or similar
  guidelines for proposal submissions available, any general help or
 advice as
  to what sort of standards I should be adhering to would be great too!
 
  Thanks,
  Ivan

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org





[jira] [Commented] (SOLR-5477) Async execution of OverseerCollectionProcessor tasks

2014-03-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932043#comment-13932043
 ] 

Mark Miller commented on SOLR-5477:
---

+1

 Async execution of OverseerCollectionProcessor tasks
 

 Key: SOLR-5477
 URL: https://issues.apache.org/jira/browse/SOLR-5477
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Anshum Gupta
 Attachments: SOLR-5477-CoreAdminStatus.patch, 
 SOLR-5477-updated.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch


 Typical collection admin commands are long running and it is very common to 
 have the requests get timed out. It is more of a problem if the cluster is 
 very large. Add an option to run these commands asynchronously:
 add an extra param async=true for all collection commands;
 the task is written to ZK and the caller is returned a task id.
 A separate collection admin command will be added to poll the status of the 
 task:
 command=status&id=7657668909
 If the id is not passed, all running async tasks should be listed.
 A separate queue is created to store in-process tasks. After the tasks are 
 completed the queue entry is removed. OverseerCollectionProcessor will perform 
 these tasks in multiple threads.
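
 As a rough sketch of the call pattern from a client's point of view (the 
 action, collection/shard names, host/port and parameter spellings below are 
 assumptions taken from the proposal text above, not the final API):
{code:java}
// Sketch of the submit-then-poll flow: the URLs, parameter names and the task id
// are illustrative assumptions based on the proposal, not a finished API.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class AsyncCollectionCallSketch {

  static String get(String url) throws Exception {
    BufferedReader in = new BufferedReader(
        new InputStreamReader(new URL(url).openStream(), "UTF-8"));
    StringBuilder sb = new StringBuilder();
    for (String line; (line = in.readLine()) != null; ) {
      sb.append(line).append('\n');
    }
    in.close();
    return sb.toString();
  }

  public static void main(String[] args) throws Exception {
    // 1. Submit a long-running command with async=true; per the proposal the
    //    response returns a task id instead of blocking until completion.
    System.out.println(get("http://localhost:8983/solr/admin/collections"
        + "?action=SPLITSHARD&collection=collection1&shard=shard1&async=true"));

    // 2. Poll the status of the task using the returned id (placeholder value).
    System.out.println(get("http://localhost:8983/solr/admin/collections"
        + "?command=status&id=7657668909"));
  }
}
{code}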



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: GSoC

2014-03-12 Thread Michael McCandless
Sorry, yes, please add comments/ideas straight on the Jira issue, i.e.
https://issues.apache.org/jira/browse/LUCENE-466 in this case.

Hmm, I'm not sure how to find past proposals.  The links to these
proposals, e.g. from my past blog post, and from past Jira issues,
seem to be broken now.

Mike McCandless

http://blog.mikemccandless.com


On Wed, Mar 12, 2014 at 1:25 PM, Ivan Biggs ivan.c.bi...@vanderbilt.edu wrote:
 First, thanks so much for getting me pointed in the right direction! I
 assume you mean straight on Jira? Also do you have any clue where one would
 be able to find past proposals for Lucene?
 Thanks,
 Ivan


 On Wed, Mar 12, 2014 at 12:08 PM, Michael McCandless
 luc...@mikemccandless.com wrote:

 Hi Ivan,

 It's best to just add a comment onto LUCENE-466 with your
 ideas/questions specific to that issue; other more general questions
 should be sent to this dev list.

 Since the big part of that issue (supporting minShouldMatch in
 BooleanQuery) was already done, I think fixing query parsers to handle
 it is important but isn't an entire GSoC project?  Or, perhaps it is
 (we have quite a few query parsers now...).  But I think doing another
 improvement in addition would be the right amount...

 The mentor assignment is somewhat ad-hoc, sort of like dating ;)  You
 should add comments to the issue, adding ideas, asking for
 suggestions, asking if anyone will mentor, and then see if any
 possible mentors respond.  I'm not sure why the issue is assigned to
 Yonik; I don't think he's actually working on it.

 You could try looking at past GSoC proposals at Apache Lucene to get an
 idea?

 Mike McCandless

 http://blog.mikemccandless.com


 On Wed, Mar 12, 2014 at 10:40 AM, Ivan Biggs
 ivan.c.bi...@vanderbilt.edu wrote:
  Hello,
  My name is Ivan Biggs and I'm very interested in working with Lucene for
  my
  Google Summer of Code Project. I've read a lot of the relevant documentation
  and
  currently have my eye on the issue found here:
 
  https://issues.apache.org/jira/browse/LUCENE-466?filter=12326260jql=labels%20%3D%20gsoc2014%20AND%20status%20%3D%20Open
 
  My only concern is that I want to be sure that this issue would be
  considered adequate work for a project in and of itself or if I should plan
  on
  tackling perhaps two of these types of issues. Furthermore, if anyone
  could
  point me in the direction of a possible future mentor, it'd be much
  appreciated as I'm not quite sure why this particular issue has an
  assignee
  listed. Also, since Apache doesn't have any sort of template or similar
  guidelines for proposal submissions available, any general help or
  advice as
  to what sort of standards I should be adhering to would be great too!
 
  Thanks,
  Ivan

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4258) Incremental Field Updates through Stacked Segments

2014-03-12 Thread Sunny Khatri (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932048#comment-13932048
 ] 

Sunny Khatri commented on LUCENE-4258:
--

Hi Guys,

I've been looking at this patch and wanted to know if there's any update on 
when it will be released.

I was able to try out the patch and observed some issues regarding the term 
positions for the stacked-up segment data. It seems that when a new update is 
made on top of the stack (Operation.ADD_FIELDS), its positions begin back at 0. 
For example (and a use case): let a document be {term1 term2 term3 term4 
term5}, and suppose we send the whole document in multiple chunks.
Update 1: term1 term2 term3
Update 2: term4 term5

Now the stack looks like this (along with the positions):
term4:::0 term5:::1
term1:::0 term2:::1 term3:::2

So what we end up with is two terms at position 0, two at position 1, etc.
CONS: Phrase queries, for instance, won't work in this case, e.g. a search for 
"term3 term4".

Just wanted to get your take on whether that issue could be resolved easily.
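
As a rough illustration of one possible direction (the ShiftPositionsFilter 
class and basePosition parameter below are hypothetical, not something in this 
patch): the appended chunk's first token could be shifted forward by a base 
position; a real fix would still need to know the right base position for the 
previous chunk.
{code:java}
// Hypothetical sketch, not part of the patch: shift the first token of an
// appended chunk by a base position so "term4" lands after "term3" instead of
// colliding at position 0.
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

public final class ShiftPositionsFilter extends TokenFilter {
  private final PositionIncrementAttribute posIncAtt =
      addAttribute(PositionIncrementAttribute.class);
  private final int basePosition;
  private boolean first = true;

  public ShiftPositionsFilter(TokenStream in, int basePosition) {
    super(in);
    this.basePosition = basePosition;
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    if (first) {
      // Push the whole appended chunk forward by the length of the previous chunk.
      posIncAtt.setPositionIncrement(posIncAtt.getPositionIncrement() + basePosition);
      first = false;
    }
    return true;
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    first = true;
  }
}
{code}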





 Incremental Field Updates through Stacked Segments
 --

 Key: LUCENE-4258
 URL: https://issues.apache.org/jira/browse/LUCENE-4258
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Sivan Yogev
 Fix For: 4.7

 Attachments: IncrementalFieldUpdates.odp, 
 LUCENE-4258-API-changes.patch, LUCENE-4258.branch.1.patch, 
 LUCENE-4258.branch.2.patch, LUCENE-4258.branch.4.patch, 
 LUCENE-4258.branch.5.patch, LUCENE-4258.branch.6.patch, 
 LUCENE-4258.branch.6.patch, LUCENE-4258.branch3.patch, 
 LUCENE-4258.r1410593.patch, LUCENE-4258.r1412262.patch, 
 LUCENE-4258.r1416438.patch, LUCENE-4258.r1416617.patch, 
 LUCENE-4258.r1422495.patch, LUCENE-4258.r1423010.patch

   Original Estimate: 2,520h
  Remaining Estimate: 2,520h

 Shai and I would like to start working on the proposal to Incremental Field 
 Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-466) Need QueryParser support for BooleanQuery.minNrShouldMatch

2014-03-12 Thread Ivan Biggs (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932049#comment-13932049
 ] 

Ivan Biggs commented on LUCENE-466:
---

Hello,
My name is Ivan Biggs and I'm very interested in working with Lucene for my 
Google Summer of Code Project. I've read a lot of the relevant documentation 
and currently have my eye on this issue; however, it is my understanding that 
if I work on this, I should likely work on another, more minor issue as well to 
constitute a more suitable workload this summer.

If anyone has any interest in mentoring, giving relevant ideas, suggesting 
another related issue, or generally giving me an idea of what sort of proposal 
Apache would be looking for, it'd be greatly appreciated.

Thanks,
Ivan

 Need QueryParser support for BooleanQuery.minNrShouldMatch
 --

 Key: LUCENE-466
 URL: https://issues.apache.org/jira/browse/LUCENE-466
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
 Environment: Operating System: other
 Platform: Other
Reporter: Mark Harwood
Priority: Minor
  Labels: gsoc2014

 Attached 2 new classes:
 1) CoordConstrainedBooleanQuery
 A boolean query that only matches if a specified number of the contained 
 clauses
 match. An example use might be a query that returns a list of books where ANY 
 2
 people from a list of people were co-authors, eg:
 Lucene In Action would match (Erik Hatcher Otis Gospodneti#263; Mark 
 Harwood
 Doug Cutting) with a minRequiredOverlap of 2 because Otis and Erik wrote 
 that.
 The book Java Development with Ant would not match because only 1 element in
 the list (Erik) was selected.
 2) CustomQueryParserExample
 A customised QueryParser that allows definition of
 CoordConstrainedBooleanQueries. The solution (mis)uses fieldnames to pass
 parameters to the custom query.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-4258) Incremental Field Updates through Stacked Segments

2014-03-12 Thread Sunny Khatri (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932048#comment-13932048
 ] 

Sunny Khatri edited comment on LUCENE-4258 at 3/12/14 5:39 PM:
---

Hi Guys,

I've been looking at this patch and wanted to know if there's any update on 
when it will be released.

I was able to try out the patch and observed some issues regarding the term 
positions for the stacked-up segment data. It seems that when a new update is 
made on top of the stack (Operation.ADD_FIELDS), its positions begin back at 0. 
For example (and a use case): let a document be {term1 term2 term3 term4 
term5}, and suppose we send the whole document in multiple chunks.
Update 1: term1 term2 term3
Update 2: term4 term5

Now the stack looks like this (along with the positions):
term4:::0 term5:::1
term1:::0 term2:::1 term3:::2

So what we end up with is two terms at position 0, two at position 1, etc.
CONS: Phrase queries, for instance, won't work in this case, e.g. a search for 
"term3 term4".

Just wanted to get your take on whether that issue could be resolved easily.

PS: I'm not sure it's trivial to resolve, as we'd need to know the maximum 
length of the actual document chunk in the previous stack, and not the maximum 
position of the last term added to the stack, since the last term in the actual 
doc could be a stopword and hence not appear in the index, depending on the 
configuration.




was (Author: sunnyk):
Hi Guys,

I've been looking at this patch and wanted to know if there's any update on the 
release date for this patch.

I was able to try out this patch and observed some issues regarding the term 
offsets for the stacked up segment data. It seems like when a new update is 
made on top of the stack (Operation.ADD_FIELDS), their offsets begins back from 
0. For example (and a use case) : Let a document be { term1 term2 term3 term4 
term5}. Now we send the whole document in multiple chunks. 
Update 1: term1 term2 term3
Update 2: term4 term5

Now the stack looks like (along with their positions):
term4:::0 term5:::1
term1:::0 term2:::1 term3:::2

So what we end up getting is two terms appearing at position 0, two on 
position1 etc.
CONS: Phrase queries, etc, won't work in this case, for instance, as search for 
term3 term4. 

Just wanted to have a take from you guys to see if that issue could be resolved 
easily ? 





 Incremental Field Updates through Stacked Segments
 --

 Key: LUCENE-4258
 URL: https://issues.apache.org/jira/browse/LUCENE-4258
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Sivan Yogev
 Fix For: 4.7

 Attachments: IncrementalFieldUpdates.odp, 
 LUCENE-4258-API-changes.patch, LUCENE-4258.branch.1.patch, 
 LUCENE-4258.branch.2.patch, LUCENE-4258.branch.4.patch, 
 LUCENE-4258.branch.5.patch, LUCENE-4258.branch.6.patch, 
 LUCENE-4258.branch.6.patch, LUCENE-4258.branch3.patch, 
 LUCENE-4258.r1410593.patch, LUCENE-4258.r1412262.patch, 
 LUCENE-4258.r1416438.patch, LUCENE-4258.r1416617.patch, 
 LUCENE-4258.r1422495.patch, LUCENE-4258.r1423010.patch

   Original Estimate: 2,520h
  Remaining Estimate: 2,520h

 Shai and I would like to start working on the proposal to Incremental Field 
 Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5854) facet.limit can limit the output of facet.pivot when facet.sort is on

2014-03-12 Thread Gennaro Frazzingaro (JIRA)
Gennaro Frazzingaro created SOLR-5854:
-

 Summary: facet.limit can limit the output of facet.pivot when 
facet.sort is on
 Key: SOLR-5854
 URL: https://issues.apache.org/jira/browse/SOLR-5854
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.4
Reporter: Gennaro Frazzingaro


Given the query
{code}
{
  "facet": true,
  "facet.pivot": "field1,field2",
  "facet.pivot.mincount": 1,
  "facet.sort": "field1 asc, field2 asc",
  "q": "",
  "rows": 1000,
  "start": 0
}
{code}

not all results are returned.
Removing facet.sort or setting facet.limit=-1 corrects the problem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-12 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5512.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.8

Thanks Furkan!

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5512.patch, LUCENE-5512.patch, LUCENE-5512.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932106#comment-13932106
 ] 

ASF subversion and git services commented on LUCENE-5512:
-

Commit 1576837 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576837 ]

LUCENE-5512: remove redundant typing (diamond operator) in trunk

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5512.patch, LUCENE-5512.patch, LUCENE-5512.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5523) MemoryIndex.addField violates TokenStream contract

2014-03-12 Thread Willem Salembier (JIRA)
Willem Salembier created LUCENE-5523:


 Summary: MemoryIndex.addField violates TokenStream contract
 Key: LUCENE-5523
 URL: https://issues.apache.org/jira/browse/LUCENE-5523
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6.1
Reporter: Willem Salembier


Running the example from the javadoc page generates an IllegalStateException.

http://lucene.apache.org/core/4_7_0/memory/org/apache/lucene/index/memory/MemoryIndex.html

{code}
java.lang.RuntimeException: java.lang.IllegalStateException: TokenStream 
contract violation: reset()/close() call missing, reset() called multiple 
times, or subclass does not call super.reset(). Please see Javadocs of 
TokenStream class for more information about the correct consuming workflow.
at 
org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:463)
at 
org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:298)
at be.curtaincall.provisioning.SearchTest.testSearch(SearchTest.java:32)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:77)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:195)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:63)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: java.lang.IllegalStateException: TokenStream contract violation: 
reset()/close() call missing, reset() called multiple times, or subclass does 
not call super.reset(). Please see Javadocs of TokenStream class for more 
information about the correct consuming workflow.
at org.apache.lucene.analysis.Tokenizer$1.read(Tokenizer.java:110)
at 
org.apache.lucene.analysis.util.CharacterUtils.readFully(CharacterUtils.java:213)
at 
org.apache.lucene.analysis.util.CharacterUtils$Java5CharacterUtils.fill(CharacterUtils.java:255)
at 
org.apache.lucene.analysis.util.CharacterUtils.fill(CharacterUtils.java:203)
at 
org.apache.lucene.analysis.util.CharTokenizer.incrementToken(CharTokenizer.java:135)
at 
org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:429)
... 28 more
{code}

Also tested in 3.7.0, but version not yet created in JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5523) MemoryIndex.addField violates TokenStream contract

2014-03-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932135#comment-13932135
 ] 

Uwe Schindler commented on LUCENE-5523:
---

Hi,

can you post your Analyzer definition (which TokenFilters and Tokenizers) do 
you use? In most cases, a broken custom TokenFilter is causing this. Do you 
have any TokenFilters or a Tokenizer that was written by you?

Does the same TokenStream work with Lucene's standard IndexWriter?
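
For reference, the consuming workflow that the contract-violation message 
refers to looks roughly like this (the field name and text are placeholders):
{code:java}
// The consumer workflow the exception message refers to: reset(), then
// incrementToken() in a loop, then end() and close(). Field name and text are
// placeholders.
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ConsumeTokenStream {
  public static void consume(Analyzer analyzer) throws Exception {
    TokenStream ts = analyzer.tokenStream("content", new StringReader("some text"));
    CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
    try {
      ts.reset();                      // mandatory before the first incrementToken()
      while (ts.incrementToken()) {
        System.out.println(termAtt.toString());
      }
      ts.end();                        // records end-of-stream attribute state
    } finally {
      ts.close();
    }
  }
}
{code}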

 MemoryIndex.addField violates TokenStream contract
 --

 Key: LUCENE-5523
 URL: https://issues.apache.org/jira/browse/LUCENE-5523
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6.1
Reporter: Willem Salembier

 Running the example from the javadoc page generates an IllegalStateException.
 http://lucene.apache.org/core/4_7_0/memory/org/apache/lucene/index/memory/MemoryIndex.html
 {code}
 java.lang.RuntimeException: java.lang.IllegalStateException: TokenStream 
 contract violation: reset()/close() call missing, reset() called multiple 
 times, or subclass does not call super.reset(). Please see Javadocs of 
 TokenStream class for more information about the correct consuming workflow.
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:463)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:298)
   at be.curtaincall.provisioning.SearchTest.testSearch(SearchTest.java:32)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
   at 
 com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:77)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:195)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:63)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
 Caused by: java.lang.IllegalStateException: TokenStream contract violation: 
 reset()/close() call missing, reset() called multiple times, or subclass does 
 not call super.reset(). Please see Javadocs of TokenStream class for more 
 information about the correct consuming workflow.
   at org.apache.lucene.analysis.Tokenizer$1.read(Tokenizer.java:110)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.readFully(CharacterUtils.java:213)
   at 
 org.apache.lucene.analysis.util.CharacterUtils$Java5CharacterUtils.fill(CharacterUtils.java:255)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.fill(CharacterUtils.java:203)
   at 
 org.apache.lucene.analysis.util.CharTokenizer.incrementToken(CharTokenizer.java:135)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:429)
   ... 28 more
 {code}
 Also tested in 3.7.0, but version not yet created in JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5523) MemoryIndex.addField violates TokenStream contract

2014-03-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932135#comment-13932135
 ] 

Uwe Schindler edited comment on LUCENE-5523 at 3/12/14 6:30 PM:


Hi,

can you post your Analyzer definition (which TokenFilters and Tokenizers do you 
use)? In most cases, a broken custom TokenFilter is causing this. Do you have 
any TokenFilters or a Tokenizer that was written by you?

Does the same TokenStream work with Lucene's standard IndexWriter?


was (Author: thetaphi):
Hi,

can you post your Analyzer definition (which TokenFilters and Tokenizers) do 
you use? In most cases, a broken custom TokenFilter is causing this. Do you 
have any TokenFilters or a Tokenizer that was written by you?

Does the same TokenStream work with Lucene's standard IndexWriter?

 MemoryIndex.addField violates TokenStream contract
 --

 Key: LUCENE-5523
 URL: https://issues.apache.org/jira/browse/LUCENE-5523
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6.1
Reporter: Willem Salembier

 Running the example from the javadoc page generates an IllegalStateException.
 http://lucene.apache.org/core/4_7_0/memory/org/apache/lucene/index/memory/MemoryIndex.html
 {code}
 java.lang.RuntimeException: java.lang.IllegalStateException: TokenStream 
 contract violation: reset()/close() call missing, reset() called multiple 
 times, or subclass does not call super.reset(). Please see Javadocs of 
 TokenStream class for more information about the correct consuming workflow.
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:463)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:298)
   at be.curtaincall.provisioning.SearchTest.testSearch(SearchTest.java:32)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
   at 
 com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:77)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:195)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:63)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
 Caused by: java.lang.IllegalStateException: TokenStream contract violation: 
 reset()/close() call missing, reset() called multiple times, or subclass does 
 not call super.reset(). Please see Javadocs of TokenStream class for more 
 information about the correct consuming workflow.
   at org.apache.lucene.analysis.Tokenizer$1.read(Tokenizer.java:110)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.readFully(CharacterUtils.java:213)
   at 
 org.apache.lucene.analysis.util.CharacterUtils$Java5CharacterUtils.fill(CharacterUtils.java:255)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.fill(CharacterUtils.java:203)
   at 
 org.apache.lucene.analysis.util.CharTokenizer.incrementToken(CharTokenizer.java:135)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:429)
   ... 28 more
 {code}
 Also tested in 3.7.0, but version not yet created in JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: 

[jira] [Commented] (LUCENE-5523) MemoryIndex.addField violates TokenStream contract

2014-03-12 Thread Willem Salembier (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932151#comment-13932151
 ] 

Willem Salembier commented on LUCENE-5523:
--

This is the complete code, taken from the javadoc:

{code:java}
Version version = Version.LUCENE_47;
Analyzer analyzer = new SimpleAnalyzer(version);
MemoryIndex index = new MemoryIndex();
index.addField("content", "Readings about Salmons and other select Alaska fishing Manuals", analyzer);
index.addField("author", "Tales of James", analyzer);
QueryParser parser = new QueryParser(version, "content", analyzer);
float score = index.search(parser.parse("+author:james +salmon~ +fish* manual~"));
if (score > 0.0f) {
    System.out.println("it's a match");
} else {
    System.out.println("no match found");
}
System.out.println("indexData=" + index.toString());
{code}

 MemoryIndex.addField violates TokenStream contract
 --

 Key: LUCENE-5523
 URL: https://issues.apache.org/jira/browse/LUCENE-5523
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6.1
Reporter: Willem Salembier

 Running the example from the javadoc page generates an IllegalStateException.
 http://lucene.apache.org/core/4_7_0/memory/org/apache/lucene/index/memory/MemoryIndex.html
 {code}
 java.lang.RuntimeException: java.lang.IllegalStateException: TokenStream 
 contract violation: reset()/close() call missing, reset() called multiple 
 times, or subclass does not call super.reset(). Please see Javadocs of 
 TokenStream class for more information about the correct consuming workflow.
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:463)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:298)
   at be.curtaincall.provisioning.SearchTest.testSearch(SearchTest.java:32)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
   at 
 com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:77)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:195)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:63)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
 Caused by: java.lang.IllegalStateException: TokenStream contract violation: 
 reset()/close() call missing, reset() called multiple times, or subclass does 
 not call super.reset(). Please see Javadocs of TokenStream class for more 
 information about the correct consuming workflow.
   at org.apache.lucene.analysis.Tokenizer$1.read(Tokenizer.java:110)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.readFully(CharacterUtils.java:213)
   at 
 org.apache.lucene.analysis.util.CharacterUtils$Java5CharacterUtils.fill(CharacterUtils.java:255)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.fill(CharacterUtils.java:203)
   at 
 org.apache.lucene.analysis.util.CharTokenizer.incrementToken(CharTokenizer.java:135)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:429)
   ... 28 more
 {code}
 Also tested in 3.7.0, but version not yet created in JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (LUCENE-5523) MemoryIndex.addField violates TokenStream contract

2014-03-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932162#comment-13932162
 ] 

Uwe Schindler commented on LUCENE-5523:
---

Hi,
we are a little bit confused about your version numbers.
Which Lucene version are you using? Lucene 3.6.1 does not have any TokenStream 
contract checks. Those were introduced in Lucene 4.6.
Maybe you have a classpath that's mixed up and has different versions of 
Lucene in it?
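
One quick way to check is to print the jar each suspect class was actually 
loaded from, e.g. (assuming the classes come from jars on the classpath):
{code:java}
// Quick classpath sanity check: print where each suspect class was loaded from.
// Assumes the classes come from jars (getCodeSource() can be null otherwise).
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.util.Version;

public class WhichJar {
  public static void main(String[] args) {
    System.out.println("memory jar:   "
        + MemoryIndex.class.getProtectionDomain().getCodeSource().getLocation());
    System.out.println("analysis jar: "
        + Tokenizer.class.getProtectionDomain().getCodeSource().getLocation());
    System.out.println("core version: " + Version.LUCENE_CURRENT);
  }
}
{code}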

 MemoryIndex.addField violates TokenStream contract
 --

 Key: LUCENE-5523
 URL: https://issues.apache.org/jira/browse/LUCENE-5523
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6.1
Reporter: Willem Salembier

 Running the example from the javadoc page generates an IllegalStateException.
 http://lucene.apache.org/core/4_7_0/memory/org/apache/lucene/index/memory/MemoryIndex.html
 {code}
 java.lang.RuntimeException: java.lang.IllegalStateException: TokenStream 
 contract violation: reset()/close() call missing, reset() called multiple 
 times, or subclass does not call super.reset(). Please see Javadocs of 
 TokenStream class for more information about the correct consuming workflow.
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:463)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:298)
   at be.curtaincall.provisioning.SearchTest.testSearch(SearchTest.java:32)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
   at 
 com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:77)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:195)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:63)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
 Caused by: java.lang.IllegalStateException: TokenStream contract violation: 
 reset()/close() call missing, reset() called multiple times, or subclass does 
 not call super.reset(). Please see Javadocs of TokenStream class for more 
 information about the correct consuming workflow.
   at org.apache.lucene.analysis.Tokenizer$1.read(Tokenizer.java:110)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.readFully(CharacterUtils.java:213)
   at 
 org.apache.lucene.analysis.util.CharacterUtils$Java5CharacterUtils.fill(CharacterUtils.java:255)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.fill(CharacterUtils.java:203)
   at 
 org.apache.lucene.analysis.util.CharTokenizer.incrementToken(CharTokenizer.java:135)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:429)
   ... 28 more
 {code}
 Also tested in 3.7.0, but version not yet created in JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5523) MemoryIndex.addField violates TokenStream contract

2014-03-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932168#comment-13932168
 ] 

Uwe Schindler commented on LUCENE-5523:
---

bq. {code:java}Version version = Version.LUCENE_47;{code}

This suggests that you are using Lucene 4.7, but the line numbers in your stack 
trace are not from that version: line 429 of MemoryIndex points to 
incrementToken() only in earlier versions.

This makes me think that you have different versions of the JAR files in your 
classpath (e.g. a newer version of the analyzers module than the MemoryIndex 
module). In that case MemoryIndex can hit this problem, because earlier versions 
did not consume TokenStreams correctly (the reset() call was missing).
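
For reference, a minimal sketch of the consuming workflow the exception message 
points to (reset(), then incrementToken() in a loop, then end() and close()), 
assuming Lucene 4.x; the analyzer, field name and text below are only 
illustrative:

{code:java}
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class ConsumeTokenStream {
  public static void main(String[] args) throws IOException {
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_47);
    TokenStream ts = analyzer.tokenStream("content", new StringReader("some example text"));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    try {
      ts.reset();                        // mandatory before the first incrementToken()
      while (ts.incrementToken()) {
        System.out.println(term.toString());
      }
      ts.end();                          // set end-of-stream attributes (final offset etc.)
    } finally {
      ts.close();                        // always release resources
    }
  }
}
{code}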

 MemoryIndex.addField violates TokenStream contract
 --

 Key: LUCENE-5523
 URL: https://issues.apache.org/jira/browse/LUCENE-5523
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6.1
Reporter: Willem Salembier

 Running the example from the javadoc page generates an IllegalStateException.
 http://lucene.apache.org/core/4_7_0/memory/org/apache/lucene/index/memory/MemoryIndex.html
 {code}
 java.lang.RuntimeException: java.lang.IllegalStateException: TokenStream 
 contract violation: reset()/close() call missing, reset() called multiple 
 times, or subclass does not call super.reset(). Please see Javadocs of 
 TokenStream class for more information about the correct consuming workflow.
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:463)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:298)
   at be.curtaincall.provisioning.SearchTest.testSearch(SearchTest.java:32)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
   at 
 com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:77)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:195)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:63)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
 Caused by: java.lang.IllegalStateException: TokenStream contract violation: 
 reset()/close() call missing, reset() called multiple times, or subclass does 
 not call super.reset(). Please see Javadocs of TokenStream class for more 
 information about the correct consuming workflow.
   at org.apache.lucene.analysis.Tokenizer$1.read(Tokenizer.java:110)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.readFully(CharacterUtils.java:213)
   at 
 org.apache.lucene.analysis.util.CharacterUtils$Java5CharacterUtils.fill(CharacterUtils.java:255)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.fill(CharacterUtils.java:203)
   at 
 org.apache.lucene.analysis.util.CharTokenizer.incrementToken(CharTokenizer.java:135)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:429)
   ... 28 more
 {code}
 Also tested in 3.7.0, but version not yet created in JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5523) MemoryIndex.addField violates TokenStream contract

2014-03-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5523.
---

Resolution: Cannot Reproduce
  Assignee: Uwe Schindler

I checked your example with Lucene 4.7 and also Lucene 4.6. In both cases the 
test succeeds, so you almost certainly have a broken classpath mixing different 
Lucene versions.
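
For anyone hitting this later, here is a minimal sketch along the lines of the 
javadoc example that was tested, assuming Lucene 4.7 on a consistent classpath; 
the field names and query string are only illustrative:

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.util.Version;

public class MemoryIndexExample {
  public static void main(String[] args) throws Exception {
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_47);
    MemoryIndex index = new MemoryIndex();
    // Index a couple of fields for a single document entirely in RAM.
    index.addField("content", "Readings about Salmons and other countries", analyzer);
    index.addField("author", "Tales of James", analyzer);
    // Score the in-memory document against an ad-hoc query.
    QueryParser parser = new QueryParser(Version.LUCENE_47, "content", analyzer);
    float score = index.search(parser.parse("+author:james +salmon~ +fish* manual~"));
    System.out.println(score > 0.0f ? "it's a match" : "no match found");
  }
}
{code}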

 MemoryIndex.addField violates TokenStream contract
 --

 Key: LUCENE-5523
 URL: https://issues.apache.org/jira/browse/LUCENE-5523
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6.1
Reporter: Willem Salembier
Assignee: Uwe Schindler

 Running the example from the javadoc page generates an IllegalStateException.
 http://lucene.apache.org/core/4_7_0/memory/org/apache/lucene/index/memory/MemoryIndex.html
 {code}
 java.lang.RuntimeException: java.lang.IllegalStateException: TokenStream 
 contract violation: reset()/close() call missing, reset() called multiple 
 times, or subclass does not call super.reset(). Please see Javadocs of 
 TokenStream class for more information about the correct consuming workflow.
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:463)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:298)
   at be.curtaincall.provisioning.SearchTest.testSearch(SearchTest.java:32)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
   at 
 com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:77)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:195)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:63)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
 Caused by: java.lang.IllegalStateException: TokenStream contract violation: 
 reset()/close() call missing, reset() called multiple times, or subclass does 
 not call super.reset(). Please see Javadocs of TokenStream class for more 
 information about the correct consuming workflow.
   at org.apache.lucene.analysis.Tokenizer$1.read(Tokenizer.java:110)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.readFully(CharacterUtils.java:213)
   at 
 org.apache.lucene.analysis.util.CharacterUtils$Java5CharacterUtils.fill(CharacterUtils.java:255)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.fill(CharacterUtils.java:203)
   at 
 org.apache.lucene.analysis.util.CharTokenizer.incrementToken(CharTokenizer.java:135)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:429)
   ... 28 more
 {code}
 Also tested in 3.7.0, but version not yet created in JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5853) Return status for AbstractFullDistribZkTestBase#createCollection() and friends

2014-03-12 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932190#comment-13932190
 ] 

Alan Woodward commented on SOLR-5853:
-

As an alternative, maybe we should nuke #createCollection() and friends and 
use the SolrJ APIs instead? And if those APIs aren't up to scratch, improve 
them until they are? It would be a good way of eating our own dog food, and it 
ensures that we're testing what users will actually use.
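
For example, something along these lines (just a sketch, not an existing test 
helper; the base URL and collection parameters are placeholders) would create a 
collection through the Collections API via SolrJ and hand back a response the 
test could inspect instead of sleeping:

{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class CreateCollectionSketch {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr"); // placeholder URL
    try {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "CREATE");
      params.set("name", "mycollection");      // placeholder collection name
      params.set("numShards", 2);
      params.set("replicationFactor", 1);
      QueryRequest request = new QueryRequest(params);
      request.setPath("/admin/collections");   // route to the Collections API handler
      NamedList<Object> response = server.request(request);
      // The response carries per-replica status, so callers can assert on it
      // rather than sleeping and hoping the collection showed up.
      System.out.println(response);
    } finally {
      server.shutdown();
    }
  }
}
{code}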

 Return status for AbstractFullDistribZkTestBase#createCollection() and friends
 --

 Key: SOLR-5853
 URL: https://issues.apache.org/jira/browse/SOLR-5853
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Jan Høydahl
 Fix For: 4.8, 5.0


 Spinoff from SOLR-4470
 Should use the excellent progress from SOLR-4577 and have the 
 createCollection methods in the test framework return a status (currently 
 void). This way we can get rid of some unnecessary and unreliable sleeps.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5523) MemoryIndex.addField violates TokenStream contract

2014-03-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5523:
--

Affects Version/s: (was: 3.6.1)
   4.7
   4.6

 MemoryIndex.addField violates TokenStream contract
 --

 Key: LUCENE-5523
 URL: https://issues.apache.org/jira/browse/LUCENE-5523
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.6, 4.7
Reporter: Willem Salembier
Assignee: Uwe Schindler

 Running the example from the javadoc page generates an IllegalStateException.
 http://lucene.apache.org/core/4_7_0/memory/org/apache/lucene/index/memory/MemoryIndex.html
 {code}
 java.lang.RuntimeException: java.lang.IllegalStateException: TokenStream 
 contract violation: reset()/close() call missing, reset() called multiple 
 times, or subclass does not call super.reset(). Please see Javadocs of 
 TokenStream class for more information about the correct consuming workflow.
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:463)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:298)
   at be.curtaincall.provisioning.SearchTest.testSearch(SearchTest.java:32)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
   at 
 com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:77)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:195)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:63)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
 Caused by: java.lang.IllegalStateException: TokenStream contract violation: 
 reset()/close() call missing, reset() called multiple times, or subclass does 
 not call super.reset(). Please see Javadocs of TokenStream class for more 
 information about the correct consuming workflow.
   at org.apache.lucene.analysis.Tokenizer$1.read(Tokenizer.java:110)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.readFully(CharacterUtils.java:213)
   at 
 org.apache.lucene.analysis.util.CharacterUtils$Java5CharacterUtils.fill(CharacterUtils.java:255)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.fill(CharacterUtils.java:203)
   at 
 org.apache.lucene.analysis.util.CharTokenizer.incrementToken(CharTokenizer.java:135)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:429)
   ... 28 more
 {code}
 Also tested in 3.7.0, but version not yet created in JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5519) Make queueDepth enforcing optional in TopNSearcher

2014-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932210#comment-13932210
 ] 

ASF subversion and git services commented on LUCENE-5519:
-

Commit 1576860 from [~simonw] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1576860 ]

LUCENE-5519: Make queueDepth enforcing optional in TopNSearcher

 Make queueDepth enforcing optional in TopNSearcher
 --

 Key: LUCENE-5519
 URL: https://issues.apache.org/jira/browse/LUCENE-5519
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.7
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5519.patch, LUCENE-5519.patch, LUCENE-5519.patch


 currently TopNSearcher enforces the maxQueueSize based on rejectedCount + 
 topN. I have a use case where I simply don't know the exact limit and I 
 am OK with a top N that is not 100% exact. Yet, if I don't specify the right 
 upper limit for the queue size I get an assertion error when I run tests, and 
 the only workaround is to make the queue unbounded, which looks odd even though 
 it would possibly work just fine. I think it's fair to add an option that just 
 doesn't enforce the limit, and if it should be enforced we throw a real 
 exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5523) MemoryIndex.addField violates TokenStream contract

2014-03-12 Thread Willem Salembier (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932211#comment-13932211
 ] 

Willem Salembier commented on LUCENE-5523:
--

I had indeed a transitive dependency on lucene-analyzers-common version 4.4.0, 
imported via the graph database Titan. I didn't realize that. Thanks for your 
support. 

 MemoryIndex.addField violates TokenStream contract
 --

 Key: LUCENE-5523
 URL: https://issues.apache.org/jira/browse/LUCENE-5523
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.6, 4.7
Reporter: Willem Salembier
Assignee: Uwe Schindler

 Running the example from the javadoc page generates an IllegalStateException.
 http://lucene.apache.org/core/4_7_0/memory/org/apache/lucene/index/memory/MemoryIndex.html
 {code}
 java.lang.RuntimeException: java.lang.IllegalStateException: TokenStream 
 contract violation: reset()/close() call missing, reset() called multiple 
 times, or subclass does not call super.reset(). Please see Javadocs of 
 TokenStream class for more information about the correct consuming workflow.
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:463)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:298)
   at be.curtaincall.provisioning.SearchTest.testSearch(SearchTest.java:32)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
   at 
 com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:77)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:195)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:63)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
 Caused by: java.lang.IllegalStateException: TokenStream contract violation: 
 reset()/close() call missing, reset() called multiple times, or subclass does 
 not call super.reset(). Please see Javadocs of TokenStream class for more 
 information about the correct consuming workflow.
   at org.apache.lucene.analysis.Tokenizer$1.read(Tokenizer.java:110)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.readFully(CharacterUtils.java:213)
   at 
 org.apache.lucene.analysis.util.CharacterUtils$Java5CharacterUtils.fill(CharacterUtils.java:255)
   at 
 org.apache.lucene.analysis.util.CharacterUtils.fill(CharacterUtils.java:203)
   at 
 org.apache.lucene.analysis.util.CharTokenizer.incrementToken(CharTokenizer.java:135)
   at 
 org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:429)
   ... 28 more
 {code}
 Also tested in 3.7.0, but version not yet created in JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4408) Server hanging on startup

2014-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13932215#comment-13932215
 ] 

Ronny Næss commented on SOLR-4408:
--

I have experienced my Tomcat instance getting stuck when deploying solr.war 
after a Tomcat restart:
INFO: Deploying web application archive 
/usr/local/Cellar/tomcat/7.0.52/libexec/webapps/solr.war

Tomcat started and deployed Solr after I set spellcheck.collate=false.
Setting useColdSearcher=true also works. Tomcat starts up in a blazing 4 seconds 
when it skips the warm-up; normally it starts in 40-50 seconds, as long as it 
is not stuck.

I am running on OS X Mavericks, JDK 1.7, Tomcat 7.0.52 and Solr 4.7.

 Server hanging on startup
 -

 Key: SOLR-4408
 URL: https://issues.apache.org/jira/browse/SOLR-4408
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1
 Environment: OpenJDK 64-Bit Server VM (23.2-b09 mixed mode)
 Tomcat 7.0
 Eclipse Juno + WTP
Reporter: Francois-Xavier Bonnet
Assignee: Erick Erickson
 Attachments: patch-4408.txt


 While starting, the server hangs indefinitely. Everything works fine when I 
 first start the server with no index created yet but if I fill the index then 
 stop and start the server, it hangs. Could it be a lock that is never 
 released?
 Here is what I get in a full thread dump:
 2013-02-06 16:28:52
 Full thread dump OpenJDK 64-Bit Server VM (23.2-b09 mixed mode):
 searcherExecutor-4-thread-1 prio=10 tid=0x7fbdfc16a800 nid=0x42c6 in 
 Object.wait() [0x7fbe0ab1]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 0xc34c1c48 (a java.lang.Object)
   at java.lang.Object.wait(Object.java:503)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1492)
   - locked 0xc34c1c48 (a java.lang.Object)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1312)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1247)
   at 
 org.apache.solr.request.SolrQueryRequestBase.getSearcher(SolrQueryRequestBase.java:94)
   at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:213)
   at 
 org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:112)
   at 
 org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:203)
   at 
 org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:180)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
   at 
 org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:64)
   at org.apache.solr.core.SolrCore$5.call(SolrCore.java:1594)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 coreLoadExecutor-3-thread-1 prio=10 tid=0x7fbe04194000 nid=0x42c5 in 
 Object.wait() [0x7fbe0ac11000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 0xc34c1c48 (a java.lang.Object)
   at java.lang.Object.wait(Object.java:503)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1492)
   - locked 0xc34c1c48 (a java.lang.Object)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1312)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1247)
   at 
 org.apache.solr.handler.ReplicationHandler.getIndexVersion(ReplicationHandler.java:495)
   at 
 org.apache.solr.handler.ReplicationHandler.getStatistics(ReplicationHandler.java:518)
   at 
 org.apache.solr.core.JmxMonitoredMap$SolrDynamicMBean.getMBeanInfo(JmxMonitoredMap.java:232)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:512)
   at org.apache.solr.core.JmxMonitoredMap.put(JmxMonitoredMap.java:140)
   at org.apache.solr.core.JmxMonitoredMap.put(JmxMonitoredMap.java:51)
   at 
 org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:636)
   at 
