[jira] [Updated] (SOLR-7123) /update/json/docs should have nested document support

2015-03-24 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-7123:
---
Attachment: SOLR-7123.patch

Added tests for nested documents.
Changed the interface.

 /update/json/docs should have nested document support
 -

 Key: SOLR-7123
 URL: https://issues.apache.org/jira/browse/SOLR-7123
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: EaseOfUse
 Attachments: SOLR-7123.patch


 It is the next logical step after SOLR-6304.
 For the example document given below, where the /orgs entries belong to a 
 nested document, 
 {code}
 {
   "name": "Joe Smith",
   "phone": 876876687,
   "orgs": [
     {"name": "Microsoft", "city": "Seattle", "zip": 98052},
     {"name": "Apple", "city": "Cupertino", "zip": 95014}
   ]
 }
 {code}
 The extra mapping parameters would be
 {noformat}
 child.split=o:/org
 o.f=name
 o.f=city
 o.f=zip
 {noformat}
 * o is the short name for that child. It is possible to map multiple children 
 with multiple short names.
 * In this example all the o.* paths are relative. It is possible to give 
 absolute path names such as o.f=/org/name 
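For illustration, the proposed split could behave roughly like the Python sketch below. The function name and output shape are assumptions, not the actual implementation; only the child.split / o.f parameter names come from the issue.

```python
def split_children(doc, child_path, fields):
    # Pull the child array named by the (relative) path and keep only
    # the fields listed via the o.f parameters.
    children = doc.get(child_path.lstrip("/"), [])
    return [{f: child[f] for f in fields if f in child} for child in children]

doc = {
    "name": "Joe Smith",
    "phone": 876876687,
    "orgs": [
        {"name": "Microsoft", "city": "Seattle", "zip": 98052},
        {"name": "Apple", "city": "Cupertino", "zip": 95014},
    ],
}

# child.split=o:/orgs with o.f=name, o.f=city, o.f=zip
orgs = split_children(doc, "/orgs", ["name", "city", "zip"])
```

Each element of the result would become one child document attached to the parent.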



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7123) /update/json/docs should have nested document support

2015-03-23 Thread Vitaliy Zhovtyuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375876#comment-14375876
 ] 

Vitaliy Zhovtyuk commented on SOLR-7123:


If I got the idea of nested documents right, the result of such a parse will be 
a list of rows.
Row values are represented as a map where the nested documents node org is an 
item in the map; the value of that entry will be a list of maps containing 
name, city and zip.
So, taking into account the sample from SOLR-6304, it can look like:
{code}
[{"recipeId": "001", "recipeType": "donut", "id": "1001", "type": "Regular",
  "o": [{"name": "Microsoft", "city": "Seattle", "zip": 98052},
        {"name": "Apple", "city": "Cupertino", "zip": 95014}]}]
{code}
So, in line with the parent split behaviour, we transform nested documents with 
child split the same way.

 /update/json/docs should have nested document support
 -

 Key: SOLR-7123
 URL: https://issues.apache.org/jira/browse/SOLR-7123
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: EaseOfUse







[jira] [Updated] (SOLR-7143) MoreLikeThis Query Parser does not handle multiple field names

2015-03-17 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-7143:
---
Attachment: SOLR-7143.patch

Multiple local parameters will not work with the previous patch because local 
params parsing produces a Map<String, String>. Replaced it with Map<String, 
String[]> and MultiMapSolrParams in org.apache.solr.search.QParser and in its 
usages. Also removed parseLocalParams (comma-split param support could be 
complicated in case of boost syntax in qf).
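The difference between the two map shapes can be illustrated with a toy parser (Python for illustration; the real change is in org.apache.solr.search.QParser, and these helper names are invented):

```python
def parse_single_valued(pairs):
    # Map<String, String>: a repeated key silently overwrites the
    # previous value, so the second qf is lost.
    params = {}
    for key, value in pairs:
        params[key] = value
    return params

def parse_multi_valued(pairs):
    # Map<String, String[]>: every value per key is kept, which is what
    # MultiMapSolrParams provides.
    params = {}
    for key, value in pairs:
        params.setdefault(key, []).append(value)
    return params

# Local params of {!mlt qf=field1 qf=field2} as key/value pairs.
pairs = [("type", "mlt"), ("qf", "field1"), ("qf", "field2")]
```

With the single-valued map only field2 survives, which is why the multi-valued form is needed.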

 MoreLikeThis Query Parser does not handle multiple field names
 --

 Key: SOLR-7143
 URL: https://issues.apache.org/jira/browse/SOLR-7143
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 5.0
Reporter: Jens Wille
Assignee: Anshum Gupta
 Attachments: SOLR-7143.patch, SOLR-7143.patch


 The newly introduced MoreLikeThis Query Parser (SOLR-6248) does not return 
 any results when supplied with multiple fields in the {{qf}} parameter.
 To reproduce within the techproducts example, compare:
 {code}
 curl 
 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name%7DMA147LL/A'
 curl 
 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=features%7DMA147LL/A'
 curl 
 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name,features%7DMA147LL/A'
 {code}
 The first two queries return 8 and 5 results, respectively. The third query 
 doesn't return any results (not even the matched document).
 In contrast, the MoreLikeThis Handler works as expected (accounting for the 
 default {{mintf}} and {{mindf}} values in SimpleMLTQParser):
 {code}
 curl 
 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name&mlt.mintf=1&mlt.mindf=1'
 curl 
 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=features&mlt.mintf=1&mlt.mindf=1'
 curl 
 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name,features&mlt.mintf=1&mlt.mindf=1'
 {code}
 After adding the following line to 
 {{example/techproducts/solr/techproducts/conf/solrconfig.xml}}:
 {code:language=XML}
 <requestHandler name="/mlt" class="solr.MoreLikeThisHandler" />
 {code}
 The first two queries return 7 and 4 results, respectively (excluding the 
 matched document). The third query returns 7 results, as one would expect.






[jira] [Updated] (SOLR-7062) CLUSTERSTATUS returns a collection with state=active, even though the collection could not be created due to a missing configSet

2015-03-16 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-7062:
---
Attachment: SOLR-7062.patch

Added a test reproducing the problem in this issue and SOLR-7053. The cause is 
the preRegister call in CoreContainer creating a record in ZK. Core creation 
fails with an exception, but the state in ZK remains active and inconsistent. 
There are two options to solve this: roll back the ZK data if core creation 
failed, or check that the configSet exists before creating the core and throw 
an exception. Implemented the latter check.
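The chosen fix, failing fast when the configSet is missing, can be sketched as follows (illustrative Python; these names are invented, and the real check sits in Solr's core-creation path):

```python
class ConfigSetMissingError(Exception):
    """Raised before any core state is written to ZooKeeper."""

def create_core(core_name, config_name, existing_configsets):
    # Verify the configSet exists before creating the core, so no
    # "active" record is ever left behind for a core that cannot start.
    if config_name not in existing_configsets:
        raise ConfigSetMissingError(
            "Specified config does not exist: " + config_name)
    return {"core": core_name, "config": config_name, "state": "active"}
```

The alternative (rolling back ZK state after a failed create) would have to undo a partially-written record instead of preventing it.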

 CLUSTERSTATUS returns a collection with state=active, even though the 
 collection could not be created due to a missing configSet
 

 Key: SOLR-7062
 URL: https://issues.apache.org/jira/browse/SOLR-7062
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.3
Reporter: Ng Agi
  Labels: solrcloud
 Attachments: SOLR-7062.patch


 A collection cannot be created if its configSet does not exist. 
 Nevertheless, a subsequent CLUSTERSTATUS CollectionAdminRequest returns this 
 collection with state=active.
 See log below.
 {noformat}
 [INFO] Overseer Collection Processor: Get the message 
 id:/overseer/collection-queue-work/qn-000110 message:{
   operation:createcollection,
   fromApi:true,
   name:blueprint_media_comments,
   collection.configName:elastic,
   numShards:1,
   property.dataDir:data,
   property.instanceDir:cores/blueprint_media_comments}
 [WARNING] OverseerCollectionProcessor.processMessage : createcollection , {
   operation:createcollection,
   fromApi:true,
   name:blueprint_media_comments,
   collection.configName:elastic,
   numShards:1,
   property.dataDir:data,
   property.instanceDir:cores/blueprint_media_comments}
 [INFO] creating collections conf node /collections/blueprint_media_comments 
 [INFO] makePath: /collections/blueprint_media_comments
 [INFO] Got user-level KeeperException when processing 
 sessionid:0x14b315b0f4a000e type:create cxid:0x2f2e zxid:0x2f4 txntype:-1 
 reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists for 
 /overseer
 [INFO] LatchChildWatcher fired on path: /overseer/queue state: SyncConnected 
 type NodeChildrenChanged
 [INFO] building a new collection: blueprint_media_comments
 [INFO] Create collection blueprint_media_comments with shards [shard1]
 [INFO] A cluster state change: WatchedEvent state:SyncConnected 
 type:NodeDataChanged path:/clusterstate.json, has occurred - updating... 
 (live nodes size: 1)
 [INFO] Creating SolrCores for new collection blueprint_media_comments, 
 shardNames [shard1] , replicationFactor : 1
 [INFO] Creating shard blueprint_media_comments_shard1_replica1 as part of 
 slice shard1 of collection blueprint_media_comments on localhost:44080_solr
 [INFO] core create command 
 qt=/admin/cores&property.dataDir=data&collection.configName=elastic&name=blueprint_media_comments_shard1_replica1&action=CREATE&numShards=1&collection=blueprint_media_comments&shard=shard1&wt=javabin&version=2&property.instanceDir=cores/blueprint_media_comments
 [INFO] publishing core=blueprint_media_comments_shard1_replica1 state=down 
 collection=blueprint_media_comments
 [INFO] LatchChildWatcher fired on path: /overseer/queue state: SyncConnected 
 type NodeChildrenChanged
 [INFO] look for our core node name
 [INFO] Update state numShards=1 message={
   core:blueprint_media_comments_shard1_replica1,
   roles:null,
   base_url:http://localhost:44080/solr,
   node_name:localhost:44080_solr,
   numShards:1,
   state:down,
   shard:shard1,
   collection:blueprint_media_comments,
   operation:state}
 [INFO] A cluster state change: WatchedEvent state:SyncConnected 
 type:NodeDataChanged path:/clusterstate.json, has occurred - updating... 
 (live nodes size: 1)
 [INFO] waiting to find shard id in clusterstate for 
 blueprint_media_comments_shard1_replica1
 [INFO] Check for collection zkNode:blueprint_media_comments
 [INFO] Collection zkNode exists
 [INFO] Load collection config from:/collections/blueprint_media_comments
 [ERROR] Specified config does not exist in ZooKeeper:elastic
 [ERROR] Error creating core [blueprint_media_comments_shard1_replica1]: 
 Specified config does not exist in ZooKeeper:elastic
 org.apache.solr.common.cloud.ZooKeeperException: Specified config does not 
 exist in ZooKeeper:elastic
   at 
 org.apache.solr.common.cloud.ZkStateReader.readConfigName(ZkStateReader.java:160)
   at 
 org.apache.solr.cloud.CloudConfigSetService.createCoreResourceLoader(CloudConfigSetService.java:37)
   at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:58)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
   at 

[jira] [Updated] (SOLR-7052) Grouping on int field with docValues in SolrCloud raises exception.

2015-03-08 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-7052:
---
Attachment: SOLR-7052.patch

I was not able to compile the tag lucene_solr_4_8_1 because of the final 
modifier on org.apache.lucene.codecs.memory.DirectDocValuesProducer#data, 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer#data, 
org.apache.lucene.codecs.lucene45.Lucene45DocValuesProducer#data and 
org.apache.lucene.codecs.memory.MemoryDocValuesProducer#data (removed the 
final modifier and added it to the patch).
Added a case with docValues on single-node and distributed grouping.
The issue is reproduced in distributed mode on Solr 4.8.1, but not on Solr 
trunk. The reason: numeric fields with docValues are handled in 
org.apache.lucene.search.grouping.term.TermFirstPassGroupingCollector, where 
binary and numeric docValues are checked in 
org.apache.lucene.search.FieldCacheImpl#getTermsIndex(org.apache.lucene.index.AtomicReader,
 java.lang.String, float).
Added a check for numeric fields to skip this collector. 
Backported changes from Solr trunk LUCENE-5666: remove insanity during 
distributed grouping.

 Grouping on int field with docValues in SolrCloud raises exception.
 ---

 Key: SOLR-7052
 URL: https://issues.apache.org/jira/browse/SOLR-7052
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.8.1
Reporter: Beliy
  Labels: SolrCloud, docValues, grouping
 Attachments: SOLR-7052.patch


 We have a grouping field which we defined as an integer; when we run a query 
 grouping on that field it works fine in a non-cloud configuration, but when 
 we try the same query in a SolrCloud configuration with multiple shards, we 
 get the following error:
 Type mismatch: fieldName was indexed as NUMERIC
 Schema:
 {code:xml}
 <dynamicField name="*_i" type="int" indexed="true" stored="true" 
   docValues="true"/>
 {code}
 Query:
 {code}
 q=*:*&group=true&group.field=fieldName&group.limit=1
 {code}






[jira] [Updated] (SOLR-7143) MoreLikeThis Query Parser does not handle multiple field names

2015-02-27 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-7143:
---
Attachment: SOLR-7143.patch

Local parameters do not support multiple-value syntax like {!mlt qf=field1 
qf=field2}, but a qf list is required in MoreLikeThis. 
Added support for comma-separated fields: {!mlt qf=field1,field2}
Also, compared to the MLT handler, the query parser does not have any boost 
support on fields. This could be extended in the qf parameter syntax.
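A minimal sketch of the comma-split approach (Python for illustration; the real parsing lives in the MLT query parser, and boost syntax such as field^2 is deliberately not handled, matching the comment above):

```python
def parse_qf(qf_value):
    # Split the comma-separated qf local param into individual field
    # names, ignoring stray whitespace and empty segments.
    return [field.strip() for field in qf_value.split(",") if field.strip()]
```

So {!mlt qf=name,features} would yield the two fields that the handler-style request expresses as mlt.fl=name,features.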

 MoreLikeThis Query Parser does not handle multiple field names
 --

 Key: SOLR-7143
 URL: https://issues.apache.org/jira/browse/SOLR-7143
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 5.0
Reporter: Jens Wille
 Attachments: SOLR-7143.patch








[jira] [Commented] (SOLR-6678) Collection/core reload is causing a memory leak

2015-02-25 Thread Vitaliy Zhovtyuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14337069#comment-14337069
 ] 

Vitaliy Zhovtyuk commented on SOLR-6678:


I did 100K reloads on techproducts and other cores, and I cannot reproduce the 
issue in a heap dump. The heap goes down after a forced GC.
Can you please provide the exact JRE version, JVM options and Solr config?

 Collection/core reload is causing a memory leak
 ---

 Key: SOLR-6678
 URL: https://issues.apache.org/jira/browse/SOLR-6678
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Alexey Serba
 Attachments: ReloadMemoryLeak.png


 I have a use case where I need to periodically 
 [reload|https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api2]
  a SolrCloud collection. Recently I did ~1k reload operations and noticed 
 that the cluster was running slower and slower, so I connected to it with 
 jconsole and noticed that heap was growing with every reload operation, 
 forcing GC wasn't helping.
 So I took a heap dump and noticed that I have too many SolrCore-s hanging 
 around. 
 It's hard for me to grok the root cause of this, but maybe someone more 
 knowledgeable in Solr internals can figure it out by looking into this GC root 
 path (see attached image)? If I interpret this correctly, it looks like one 
 SolrCore is referencing another SolrCore through SolrSuggester. Maybe the 
 close hook for the SolrSuggester component doesn't release everything that it 
 should be releasing (like SolrSuggester.dictionary)?






[jira] [Updated] (SOLR-6878) solr.ManagedSynonymFilterFactory all-to-all synonym switch (aka. expand)

2015-02-21 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6878:
---
Attachment: SOLR-6878.patch

Added support for expand parameter and tests for both cases.

 solr.ManagedSynonymFilterFactory all-to-all synonym switch (aka. expand)
 

 Key: SOLR-6878
 URL: https://issues.apache.org/jira/browse/SOLR-6878
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Affects Versions: 4.10.2
Reporter: Tomasz Sulkowski
  Labels: ManagedSynonymFilterFactory, REST, SOLR
 Attachments: SOLR-6878.patch


 Hi,
 After switching from SynonymFilterFactory to ManagedSynonymFilterFactory I 
 have found out that there is no way to set an all-to-all synonyms relation. 
 Basically (judging from a Google search) there is a need for an expand 
 functionality switch (known from SynonymFilterFactory) which will treat all 
 synonyms along with their keyword as equal.
 For example: if we define a "car":["wagen","ride"] relation, it would 
 translate a query that includes one of the synonyms or the keyword to "car" 
 or "wagen" or "ride", independently of which of those three words was used.
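The requested all-to-all (expand) behaviour can be sketched like this (illustrative Python; the helper is an assumption, not ManagedSynonymFilterFactory's API):

```python
def expand_synonyms(mapping):
    # All-to-all: every term in a group (the keyword plus its synonyms)
    # maps to the full group, so any of them matches any other.
    expanded = {}
    for keyword, synonyms in mapping.items():
        group = sorted({keyword, *synonyms})
        for term in group:
            expanded[term] = group
    return expanded

rules = {"car": ["wagen", "ride"]}
expanded = expand_synonyms(rules)
```

With expand off, only "car" would map to ["wagen", "ride"]; with expand on, querying any of the three terms reaches the whole group.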






[jira] [Updated] (SOLR-6977) Given a date/time, facet only by time

2015-02-17 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6977:
---
Attachment: SOLR-6977.patch

Added code to group facet range counts by date parts: date and time separately.

 Given a date/time, facet only by time
 -

 Key: SOLR-6977
 URL: https://issues.apache.org/jira/browse/SOLR-6977
 Project: Solr
  Issue Type: Bug
  Components: faceting, SearchComponents - other
Reporter: Grant Ingersoll
 Attachments: SOLR-6977.patch


 Given a field that is indexed as date/time, it would be great if range 
 faceting could facet only on the date or only on the time as an option. For 
 instance, given a month's worth of data, I'd like to be able to see what the 
 hotspots are throughout the day. Now, I could index a separate time field, 
 but that seems redundant, as the data is already there.
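The request, faceting on time-of-day while ignoring the date part, can be sketched client-side (illustrative Python; Solr range faceting would do this server-side over the indexed field):

```python
from collections import Counter
from datetime import datetime

def hour_of_day_facet(timestamps):
    # Count documents per hour of day, discarding the date part of each
    # indexed date/time value.
    return Counter(datetime.fromisoformat(ts).hour for ts in timestamps)

docs = ["2015-02-01T09:15:00", "2015-02-14T09:45:00", "2015-02-20T17:30:00"]
facets = hour_of_day_facet(docs)
```

Two documents from different dates fall into the same 09:00 bucket, which is exactly the hotspot view described above.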






[jira] [Updated] (SOLR-6977) Given a date/time, facet only by time

2015-02-17 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6977:
---
Attachment: SOLR-6977.patch

Cleaned up unnecessary changes.

 Given a date/time, facet only by time
 -

 Key: SOLR-6977
 URL: https://issues.apache.org/jira/browse/SOLR-6977
 Project: Solr
  Issue Type: Bug
  Components: faceting, SearchComponents - other
Reporter: Grant Ingersoll
 Attachments: SOLR-6977.patch, SOLR-6977.patch








[jira] [Updated] (SOLR-3218) Range faceting support for CurrencyField

2015-02-08 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-3218:
---
Attachment: SOLR-3218.patch

1. Updated to latest trunk.
2. Changed Currency.toString() to strValue so that toString is not used in 
range calculation.
3. Added a test for stats.facet on the currency type.
4. Added stats support for the currency type; stats are calculated on the 
default currency of the type.
Min, max and sum return a currency value with the currency code; by default 
value.toString() is used to render those results, so toString is delegated to 
strValue.
Mean, sumOfSquares and stddev just return a numeric value in cents without the 
currency code.
(Not sure that we need those for currency; I would remove them.)
5. Added tests for stats on the currency field.


 Range faceting support for CurrencyField
 

 Key: SOLR-3218
 URL: https://issues.apache.org/jira/browse/SOLR-3218
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Jan Høydahl
 Fix For: 4.9, Trunk

 Attachments: SOLR-3218-1.patch, SOLR-3218-2.patch, SOLR-3218.patch, 
 SOLR-3218.patch, SOLR-3218.patch, SOLR-3218.patch, SOLR-3218.patch, 
 SOLR-3218.patch


 Spinoff from SOLR-2202. Need to add range faceting capabilities for 
 CurrencyField






[jira] [Updated] (SOLR-6635) Cursormark should support skipping/goto functionality

2015-01-25 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6635:
---
Attachment: SOLR-6635.patch

As I understood the skip case, it is about using skip to calculate the next 
cursor mark, so this is pure cursor functionality. 
org.apache.solr.CursorPagingTest#testSkip illustrates the idea. This will 
require changes in the TopDocs length calculation, taking the skip value into 
account for the next cursor mark calculation. This code also needs to be 
refactored to remove duplicates. 
Another approach could be serialization of the skip parameter along with the 
sort values in the cursor, and using the skip value as an offset for the next 
call.
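The first approach, folding skip into the next cursor mark calculation, can be sketched with integer offsets (illustrative Python; a real cursorMark is an opaque string of sort values, not an offset, so this only shows the arithmetic):

```python
def cursor_page(docs, cursor_mark, rows, skip=0):
    # Advance past both the skipped documents and the returned page, so
    # the skip is folded into the next cursor mark.
    start = cursor_mark + skip
    page = docs[start:start + rows]
    return page, start + len(page)

docs = list(range(10))
page1, mark = cursor_page(docs, 0, 3, skip=2)   # skip 2 docs, return 3
page2, _ = cursor_page(docs, mark, 3)           # continue from the mark
```

The follow-up request needs no skip of its own: the returned mark already points past the skipped region.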

 Cursormark should support skipping/goto functionality
 -

 Key: SOLR-6635
 URL: https://issues.apache.org/jira/browse/SOLR-6635
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
Reporter: Thomas Blitz
  Labels: cursormark, pagination, search, solr
 Attachments: SOLR-6635.patch


 Deep pagination is possible with the cursorMark.
 We have discovered a need to be able to 'skip' a number of results.
 Using the cursorMark it should be possible to define a request with a skip 
 parameter, allowing the cursorMark to simply skip a number of articles, kinda 
 like a goto, and then return results from that point in the result set.






[jira] [Updated] (SOLR-2072) Search Grouping: expand group sort options

2014-12-26 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-2072:
---
Attachment: SOLR-2072.patch

Added tests for sorting groups and sorting inside groups by functions; added a 
test for sorting inside a group by a function in distributed mode.

 Search Grouping: expand group sort options
 --

 Key: SOLR-2072
 URL: https://issues.apache.org/jira/browse/SOLR-2072
 Project: Solr
  Issue Type: Sub-task
Reporter: Yonik Seeley
 Attachments: SOLR-2072-01.patch, SOLR-2072.patch


 Ability to specify functions over group documents when sorting groups.  
 max(score) or avg(popularity), etc.






[jira] [Updated] (SOLR-5660) Send request level commitWithin as a param rather than setting it per doc

2014-12-14 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5660:
---
Attachment: SOLR-5660.patch

Added handling of commitWithin as a parameter: it is passed at the request 
level, and when not present it is passed per document.
Added a test for the request level, but failed to reproduce commitWithin per 
document 
(org.apache.solr.cloud.FullSolrCloudDistribCmdsTest#testIndexingCommitWithinOnAttr).
If two documents contain different commitWithin values, the request fails with 
an exception:
{quote}org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
Illegal to have multiple roots (start tag in epilog?).
 at [row,col {unknown-source}]: [1,236]
at 
__randomizedtesting.SeedInfo.seed([FC019F99FE2DEADF:7DE7118189728AE3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingCommitWithinOnAttr(FullSolrCloudDistribCmdsTest.java:183)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.doTest(FullSolrCloudDistribCmdsTest.java:143)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618){quote}

Is this a bug due to malformed XML?
If one document contains a commitWithin value, it is not taken into account 
(commitWithin=-1). It seems this value is unmarshalled incorrectly in 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec#unmarshal.


 Send request level commitWithin as a param rather than setting it per doc
 -

 Key: SOLR-5660
 URL: https://issues.apache.org/jira/browse/SOLR-5660
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers, SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: 4.9, Trunk

 Attachments: SOLR-5660.patch


 In SolrCloud the commitWithin parameter is sent per-document even if it is 
 set on the entire request.
 We should send request level commitWithin as a param rather than setting it 
 per doc - that would mean less repeated data in the request. We still need to 
 properly support per doc like this as well though, because that is the level 
 cmd objects support and we are distributing cmd objects.






[jira] [Commented] (SOLR-6376) Edismax field alias bug

2014-12-12 Thread Vitaliy Zhovtyuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244559#comment-14244559
 ] 

Vitaliy Zhovtyuk commented on SOLR-6376:


The unit test uses the same configuration as the standalone version. Two more 
parameters, present in the /browse request handler configuration, are needed 
to reproduce the parsing problem:
   <str name="qf">
      text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
      title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0
   </str>
   <str name="mm">100%</str>


 Edismax field alias bug
 ---

 Key: SOLR-6376
 URL: https://issues.apache.org/jira/browse/SOLR-6376
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.6.1, 4.7, 4.7.2, 4.8, 4.9, 4.10.1
Reporter: Thomas Egense
Priority: Minor
  Labels: difficulty-easy, edismax, impact-low
 Attachments: SOLR-6376.patch, SOLR-6376.patch


 If you create a field alias that maps to a nonexistent field, the query will 
 be parsed to utter garbage.
 The bug can be reproduced very easily. Add the following line to the /browse 
 request handler in the tutorial example solrconfig.xml:
 <str name="f.name_features.qf">name features XXX</str>
 (XXX is a nonexistent field)
 This simple query will actually work correctly: 
 name_features:video
 and it will be parsed to (features:video | name:video) and return 3 results. 
 It has simply discarded the nonexistent field and the result set is correct.
 However, if you change the query to:
 name_features:video AND name_features:video
 you will now get 0 results and the query is parsed to 
 +(((features:video | name:video) (id:AND^10.0 | author:and^2.0 | 
 title:and^10.0 | cat:AND^1.4 | text:and^0.5 | keywords:and^5.0 | manu:and^1.1 
 | description:and^5.0 | resourcename:and | name:and^1.2 | features:and) 
 (features:video | name:video))~3)
 Notice the AND operator is now used as a term! The parsed query can turn out 
 even worse and produce query parts such as:
 title:2~2
 title:and^2.0^10.0  
 Preferred solution: During startup, shut down Solr if there is a nonexistent 
 field alias, just as is the case when the cycle detection detects a cycle.
 Acceptable solution: Ignore the nonexistent field entirely.
 Thomas Egense






[jira] [Updated] (SOLR-6016) Failure indexing exampledocs with example-schemaless mode

2014-12-09 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6016:
---
Attachment: SOLR-6019.patch

Yes, you're right. Missing file 
{{server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml}} added.

 Failure indexing exampledocs with example-schemaless mode
 -

 Key: SOLR-6016
 URL: https://issues.apache.org/jira/browse/SOLR-6016
 Project: Solr
  Issue Type: Bug
  Components: documentation, Schema and Analysis
Affects Versions: 4.7.2, 4.8
Reporter: Shalin Shekhar Mangar
Assignee: Timothy Potter
 Attachments: SOLR-6016.patch, SOLR-6016.patch, SOLR-6016.patch, 
 solr.log


 Steps to reproduce:
 # cd example; java -Dsolr.solr.home=example-schemaless/solr -jar start.jar
 # cd exampledocs; java -jar post.jar *.xml
 Output from post.jar
 {code}
 Posting files to base url http://localhost:8983/solr/update using 
 content-type application/xml..
 POSTing file gb18030-example.xml
 POSTing file hd.xml
 POSTing file ipod_other.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file ipod_video.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file manufacturers.xml
 POSTing file mem.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file money.xml
 POSTing file monitor2.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file monitor.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file mp500.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file sd500.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file solr.xml
 POSTing file utf8-example.xml
 POSTing file vidcard.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 14 files indexed.
 COMMITting Solr index changes to http://localhost:8983/solr/update..
 Time spent: 0:00:00.401
 {code}
 Exceptions in Solr (I am pasting just one of them):
 {code}
 5105 [qtp697879466-14] ERROR org.apache.solr.core.SolrCore  – 
 org.apache.solr.common.SolrException: ERROR: [doc=EN7800GTX/2DHTV/256M] Error 
 adding field 'price'='479.95' msg=For input string: 479.95
   at 
 org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:167)
   at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:234)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
   at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
   at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
 ..
 Caused by: java.lang.NumberFormatException: For input string: 479.95
   at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
   at java.lang.Long.parseLong(Long.java:441)
   at java.lang.Long.parseLong(Long.java:483)
   at org.apache.solr.schema.TrieField.createField(TrieField.java:609)
   at org.apache.solr.schema.TrieField.createFields(TrieField.java:660)
 {code}
 The full solr.log is attached.
 I understand why these errors occur but since we ship example data with Solr 
 to demonstrate our core features, I expect that indexing exampledocs should 
 work without errors.
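The failure mode can be illustrated in isolation: schemaless mode guesses a long-based type from the first numeric value it sees for a field, after which a decimal value such as 479.95 fails Long.parseLong, producing the 400 errors above. The guessing logic below is a simplified assumption, not Solr's actual update-processor chain:

```java
public class FieldTypeGuess {
    // Simplified sketch: guess a field type from a single value.
    public static String guess(String value) {
        try {
            Long.parseLong(value);
            return "tlong";        // whole number: long-based trie field
        } catch (NumberFormatException e) {
            try {
                Double.parseDouble(value);
                return "tdouble";  // decimal number
            } catch (NumberFormatException e2) {
                return "text";
            }
        }
    }

    // Once a field's type is locked to tlong, a decimal string no longer fits.
    public static boolean fitsLockedType(String lockedType, String value) {
        if ("tlong".equals(lockedType)) {
            try { Long.parseLong(value); return true; }
            catch (NumberFormatException e) { return false; }
        }
        return true;
    }

    public static void main(String[] args) {
        String locked = guess("92");  // first doc has an integer price
        // Later doc with price=479.95 no longer fits: the 400 Bad Request.
        System.out.println(fitsLockedType(locked, "479.95"));
    }
}
```

This is why indexing order matters in the example data: whichever document's price is seen first fixes the guessed type for the whole field.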

[jira] [Updated] (SOLR-6016) Failure indexing exampledocs with example-schemaless mode

2014-12-09 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6016:
---
Attachment: (was: SOLR-6019.patch)

 Failure indexing exampledocs with example-schemaless mode
 -

 Key: SOLR-6016
 URL: https://issues.apache.org/jira/browse/SOLR-6016
 Project: Solr
  Issue Type: Bug
  Components: documentation, Schema and Analysis
Affects Versions: 4.7.2, 4.8
Reporter: Shalin Shekhar Mangar
Assignee: Timothy Potter
 Attachments: SOLR-6016.patch, SOLR-6016.patch, SOLR-6016.patch, 
 solr.log







[jira] [Updated] (SOLR-6016) Failure indexing exampledocs with example-schemaless mode

2014-12-09 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6016:
---
Attachment: SOLR-6016.patch

 Failure indexing exampledocs with example-schemaless mode
 -

 Key: SOLR-6016
 URL: https://issues.apache.org/jira/browse/SOLR-6016
 Project: Solr
  Issue Type: Bug
  Components: documentation, Schema and Analysis
Affects Versions: 4.7.2, 4.8
Reporter: Shalin Shekhar Mangar
Assignee: Timothy Potter
 Attachments: SOLR-6016.patch, SOLR-6016.patch, SOLR-6016.patch, 
 SOLR-6016.patch, solr.log







[jira] [Updated] (SOLR-6019) Managed schema file does not show up in the Files UI

2014-12-07 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6019:
---
Attachment: SOLR-6019.patch

The file should be available for viewing in the admin UI, but restricted from 
editing. Currently, both the view and edit handlers restrict access to the 
managed schema.
Changes:
1. Removed the restriction on the managed schema file in ShowFileRequestHandler.
2. Added a restriction on editing the file in EditFileRequestHandler.
3. Fixed a JavaScript issue when viewing a file without an extension: the default 
content type is now passed.
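The intended access rule can be sketched as follows. The class and method names are hypothetical, not the actual request-handler API:

```java
public class ManagedSchemaAccess {
    // Sketch of the rule described above: the managed schema must be viewable
    // but not editable through the admin file handlers.
    public static boolean canView(String fileName) {
        // ShowFileRequestHandler: no longer hides managed-schema.
        return true;
    }

    public static boolean canEdit(String fileName) {
        // EditFileRequestHandler: managed-schema is edited only through the
        // schema API, never as a raw file.
        return !"managed-schema".equals(fileName);
    }

    public static void main(String[] args) {
        System.out.println(canView("managed-schema"));   // viewable
        System.out.println(canEdit("managed-schema"));   // not editable
        System.out.println(canEdit("solrconfig.xml"));   // other files unaffected
    }
}
```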

 Managed schema file does not show up in the Files UI
 --

 Key: SOLR-6019
 URL: https://issues.apache.org/jira/browse/SOLR-6019
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis, web gui
Affects Versions: 4.7.2, 4.8
Reporter: Shalin Shekhar Mangar
 Attachments: 6019-missing-managed-schema.png, SOLR-6019.patch


 When running with the schema-less example, I noticed that the managed-schema 
 file does not show in the Files section of the Admin UI. This can be 
 confusing for a user. To make sure it was not a caching issue on the browser, 
 I closed and opened the UI again in a new tab. I also restarted Solr and 
 still the managed-schema is not visible in the Files section. Interestingly, 
 the schema.xml.bak does show up. A screenshot of the UI is attached.
 It is possible that this bug affects other managed resources as well such as 
 synonyms but I haven't tested that yet.
 The schema browser works fine though.






[jira] [Updated] (SOLR-6016) Failure indexing exampledocs with example-schemaless mode

2014-12-07 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6016:
---
Attachment: SOLR-6016.patch

Updated to latest trunk. Removed whitespace changes.

 Failure indexing exampledocs with example-schemaless mode
 -

 Key: SOLR-6016
 URL: https://issues.apache.org/jira/browse/SOLR-6016
 Project: Solr
  Issue Type: Bug
  Components: documentation, Schema and Analysis
Affects Versions: 4.7.2, 4.8
Reporter: Shalin Shekhar Mangar
 Attachments: SOLR-6016.patch, SOLR-6016.patch, SOLR-6016.patch, 
 solr.log







[jira] [Updated] (SOLR-3881) frequent OOM in LanguageIdentifierUpdateProcessor

2014-12-01 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-3881:
---
Attachment: SOLR-3881.patch

1. LangDetectLanguageIdentifierUpdateProcessor.detectLanguage() still uses 
concatFields(), but it shouldn't – that was the whole point about moving it to 
TikaLanguageIdentifierUpdateProcessor; instead, 
LangDetectLanguageIdentifierUpdateProcessor.detectLanguage() should loop over 
inputFields and call detector.append() (similarly to what concatFields() does).
[VZ] LangDetectLanguageIdentifierUpdateProcessor.detectLanguage() changed to 
use the old flow, with a limit per field and a max total on the detector. 
Each field value is appended to the detector.

2. concatFields() and getExpectedSize() should move to 
TikaLanguageIdentifierUpdateProcessor.
[VZ] Moved to TikaLanguageIdentifierUpdateProcessor. Tests using concatFields() 
moved to TikaLanguageIdentifierUpdateProcessorFactoryTest.

3. LanguageIdentifierUpdateProcessor.getExpectedSize() still takes a 
maxAppendSize, which didn't get renamed, but that param could be removed 
entirely, since maxFieldValueChars is available as a data member.
[VZ] Argument removed.

4. There are a bunch of whitespace changes in 
LanguageIdentifierUpdateProcessorFactoryTestCase.java - it makes reviewing 
patches significantly harder when they include changes like this. Your IDE 
should have settings that make it stop doing this.
[VZ] Whitespaces removed.

5. There is still some import reordering in 
TikaLanguageIdentifierUpdateProcessor.java.
[VZ] Fixed.

One last thing:
The total chars default should be its own setting; I was thinking we could make 
it double the per-value default?
[VZ] Added a default value for maxTotalChars and changed both defaults to 10K, 
like com.cybozu.labs.langdetect.Detector.maxLength.
Thanks for adding the total chars default, but you didn't make it double the 
field value chars default, as I suggested. Not sure if that's better - if the 
user specifies multiple fields and the first one is the only one that's used to 
determine the language because it's larger than the total char default, is that 
an issue? I was thinking that it would be better to visit at least one other 
field (hence the idea of total = 2 * per-field), but that wouldn't fully 
address the issue. What do you think?
[VZ] I think in most cases only one field will be used, but since both parameters 
are optional, we should not restrict the result when only the per-field limit is 
specified at more than 10K.
Updated the total default value to 20K.
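The per-field and total limits discussed above can be sketched like this. This is a simplified stand-in for the real detector.append() flow; the constants mirror the 10K/20K defaults mentioned, and the class name is illustrative:

```java
public class DetectorLimits {
    static final int MAX_FIELD_VALUE_CHARS = 10_000; // per-field default (10K)
    static final int MAX_TOTAL_CHARS = 20_000;       // total default (20K)

    // Append each field value, truncated per field, stopping once the total
    // budget is exhausted. This bounds memory regardless of document size,
    // which is the fix for the OOM in concatFields().
    public static String collect(String... fieldValues) {
        StringBuilder buf = new StringBuilder();
        for (String v : fieldValues) {
            if (buf.length() >= MAX_TOTAL_CHARS) break;
            int perField = Math.min(v.length(), MAX_FIELD_VALUE_CHARS);
            int room = MAX_TOTAL_CHARS - buf.length();
            buf.append(v, 0, Math.min(perField, room));
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        String big = new String(new char[15_000]).replace('\0', 'a');
        // 15K field truncated to 10K; a second 15K field fills the remaining
        // 10K of the total budget, so at least two fields are visited.
        System.out.println(collect(big, big).length());
    }
}
```

With total = 2 * per-field, a single oversized first field cannot consume the whole budget, which addresses the "visit at least one other field" concern above.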


 frequent OOM in LanguageIdentifierUpdateProcessor
 -

 Key: SOLR-3881
 URL: https://issues.apache.org/jira/browse/SOLR-3881
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.0
 Environment: CentOS 6.x, JDK 1.6, (java -server -Xms2G -Xmx2G 
 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=)
Reporter: Rob Tulloh
 Fix For: 4.9, Trunk

 Attachments: SOLR-3881.patch, SOLR-3881.patch, SOLR-3881.patch, 
 SOLR-3881.patch, SOLR-3881.patch


 We are seeing frequent failures from Solr causing it to OOM. Here is the 
 stack trace we observe when this happens:
 {noformat}
 Caused by: java.lang.OutOfMemoryError: Java heap space
 at java.util.Arrays.copyOf(Arrays.java:2882)
 at 
 java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
 at 
 java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
 at java.lang.StringBuffer.append(StringBuffer.java:224)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.concatFields(LanguageIdentifierUpdateProcessor.java:286)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.process(LanguageIdentifierUpdateProcessor.java:189)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:171)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler$2.update(BinaryUpdateRequestHandler.java:90)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:140)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:120)
 at 
 org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:221)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:105)
 at 
 org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:186)
 at 
 org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:112)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:147)
 

[jira] [Updated] (SOLR-6376) Edismax field alias bug

2014-11-30 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6376:
---
Attachment: SOLR-6376.patch

I was able to reproduce the issue with a unit test; please see the attached patch.
The parsing behavior looks as expected: when a nonexistent field is encountered, a 
runtime exception indicating the unknown field is thrown, and the parser escapes 
the query and re-parses it.
In this case, the parsed clauses are converted to the escaped form myalias:Zapp AND 
myalias:Zapp, where AND is escaped as a literal term.


 Edismax field alias bug
 ---

 Key: SOLR-6376
 URL: https://issues.apache.org/jira/browse/SOLR-6376
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.6.1, 4.7, 4.7.2, 4.8, 4.9, 4.10.1
Reporter: Thomas Egense
Priority: Minor
  Labels: difficulty-easy, edismax, impact-low
 Attachments: SOLR-6376.patch, SOLR-6376.patch


 If you create a field alias that maps to a nonexistent field, the query will 
 be parsed to utter garbage.
 The bug can be reproduced very easily. Add the following line to the /browse 
 request handler in the tutorial example solrconfig.xml:
 <str name="f.name_features.qf">name features XXX</str>
 (XXX is a nonexistent field)
 This simple query will actually work correctly: 
 name_features:video
 and it will be parsed to (features:video | name:video) and return 3 results. 
 It has simply discarded the nonexistent field, and the result set is correct.
 However, if you change the query to:
 name_features:video AND name_features:video
 you will now get 0 results, and the query is parsed to 
 +(((features:video | name:video) (id:AND^10.0 | author:and^2.0 | 
 title:and^10.0 | cat:AND^1.4 | text:and^0.5 | keywords:and^5.0 | manu:and^1.1 
 | description:and^5.0 | resourcename:and | name:and^1.2 | features:and) 
 (features:video | name:video))~3)
 Notice the AND operator is now used as a term! The parsed query can turn out 
 even worse and produce query parts such as:
 title:2~2
 title:and^2.0^10.0
 Preferred solution: during startup, shut down Solr if there is a nonexistent 
 field alias, just as is the case when cycle detection detects a cycle.
 Acceptable solution: ignore the nonexistent field entirely.
 Thomas Egense






[jira] [Updated] (SOLR-5041) Add a test to make sure that a leader always recovers from log on startup

2014-11-23 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5041:
---
Attachment: SOLR-5041.patch

Added a test that sends updates, stops the test server, then restarts it and 
commits. Added single-shard and multi-shard tests.

 Add a test to make sure that a leader always recovers from log on startup
 -

 Key: SOLR-5041
 URL: https://issues.apache.org/jira/browse/SOLR-5041
 Project: Solr
  Issue Type: Test
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: 4.9, Trunk

 Attachments: SOLR-5041.patch


 From my comment on SOLR-4997:
 bq. I fixed a bug that I had introduced which skipped log recovery on startup 
 for all leaders instead of only sub shard leaders. I caught this only because 
 I was doing another line-by-line review of all my changes. We should have a 
 test which catches such a condition.
 Add a test which tests that leaders always recover from log on startup.






[jira] [Commented] (SOLR-6054) Log progress of transaction log replays

2014-11-08 Thread Vitaliy Zhovtyuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14203587#comment-14203587
 ] 

Vitaliy Zhovtyuk commented on SOLR-6054:


I think logging the replay state once per minute is enough; a polling thread can 
lead to thread-leak issues.
SOLR-6403 does this well, so SOLR-6054 can be closed as a duplicate.
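A minimal sketch of the checkpoint-based progress logging the issue asks for, logging the log size up front and a progress line every N operations. This is illustrative only, not the actual UpdateLog replay code:

```java
public class ReplayProgress {
    /**
     * Simulates replaying totalOps operations from a transaction log of the
     * given size, printing a checkpoint every `every` operations.
     * Returns the number of checkpoint lines emitted.
     */
    public static int checkpoints(long logSizeBytes, int totalOps, int every) {
        System.out.println("Replaying tlog, size=" + logSizeBytes
                + " bytes, ops=" + totalOps);
        int reports = 0;
        for (int i = 1; i <= totalOps; i++) {
            // ... apply operation i here ...
            if (i % every == 0) {
                System.out.println("replayed " + i + "/" + totalOps
                        + " ops (" + (100 * i / totalOps) + "%)");
                reports++;
            }
        }
        return reports;
    }

    public static void main(String[] args) {
        checkpoints(1_048_576, 1000, 250); // four checkpoint lines
    }
}
```

A count-based checkpoint like this avoids a separate polling thread entirely, which sidesteps the thread-leak concern raised above.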

 Log progress of transaction log replays
 ---

 Key: SOLR-6054
 URL: https://issues.apache.org/jira/browse/SOLR-6054
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.9, Trunk

 Attachments: SOLR-6054.patch


 There is zero logging of how a transaction log replay is progressing. We 
 should add some simple checkpoint based progress information. Logging the 
 size of the log file at the beginning would also be useful.






[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-11-02 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6351:
---
Attachment: SOLR-6351.patch

Added a whitebox test, DistributedFacetPivotWhiteBoxTest, simulating pivot-stats 
shard requests in two cases: fetching top-level pivots and refinement requests. 
Both include stats on pivots.

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that has never worked well with the variety of field types and has 
 never worked in distributed mode), we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}.
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer
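For context, the example request above travels as ordinary multi-valued HTTP parameters, with the {{tag}} local param carried inside the parameter text. A minimal pure-Java sketch (the {{buildQuery}} helper is hypothetical, not Solr or SolrJ API):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class Main {
    // Assembles a Solr query string from multi-valued parameters, mirroring
    // the facet.pivot / stats.field example above. Purely illustrative.
    static String buildQuery(Map<String, List<String>> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, List<String>> e : params.entrySet()) {
            for (String v : e.getValue()) {
                if (sb.length() > 0) sb.append('&');
                sb.append(e.getKey()).append('=')
                  .append(URLEncoder.encode(v, StandardCharsets.UTF_8));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, List<String>> params = new LinkedHashMap<>();
        params.put("facet.pivot", List.of("{!stats=s1}category,manufacturer"));
        params.put("stats.field", List.of(
                "{!key=avg_price tag=s1 mean=true}price",
                "{!tag=s1 min=true max=true}user_rating"));
        System.out.println(buildQuery(params));
    }
}
```

The point is only that repeated {{stats.field}} parameters, each with its own local params, are what associate stats with a tagged pivot.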






[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-28 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6351:
---
Attachment: SOLR-6351.patch

Restored FacetPivotSmallTest, which was lost between patches.
Added a distributed test, 
org.apache.solr.handler.component.DistributedFacetPivotSmallAdvancedTest, covering
 3 additional cases:
 1. Getting pivot stats on a string stats field
 2. Getting top-level stats alongside pivot stats
 3. Pivot stats that differ between shards
 
Added a getter, 
org.apache.solr.handler.component.PivotFacetValue#getStatsValues, to check for 
the presence of stats values.
 Whitebox test assertions are not yet completed; still working on them.

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer






[jira] [Updated] (SOLR-6376) Edismax field alias bug

2014-10-26 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6376:
---
Attachment: SOLR-6376.patch

The issue could not be reproduced: a nonexistent field is completely ignored in 
the parsed query.
Added a test to TestExtendedDismaxParser to verify this.

 Edismax field alias bug
 ---

 Key: SOLR-6376
 URL: https://issues.apache.org/jira/browse/SOLR-6376
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.6.1, 4.7, 4.7.2, 4.8, 4.9
Reporter: Thomas Egense
Priority: Minor
  Labels: difficulty-easy, edismax, impact-low
 Attachments: SOLR-6376.patch


 If you create a field alias that maps to a nonexistent field, the query will 
 be parsed to utter garbage.
 The bug can be reproduced very easily. Add the following line to the /browse 
 request handler in the tutorial example solrconfig.xml:
 <str name="f.name_features.qf">name features XXX</str>
 (XXX is a nonexistent field)
 This simple query will actually work correctly: 
 name_features:video
 and it will be parsed to  (features:video | name:video) and return 3 results. 
 It has simply discarded the nonexistent field and the result set is correct.
 However if you change the query to:
 name_features:video AND name_features:video
 you will now get 0 result and the query is parsed to 
 +(((features:video | name:video) (id:AND^10.0 | author:and^2.0 | 
 title:and^10.0 | cat:AND^1.4 | text:and^0.5 | keywords:and^5.0 | manu:and^1.1 
 | description:and^5.0 | resourcename:and | name:and^1.2 | features:and) 
 (features:video | name:video))~3)
 Notice the AND operator is now used as a term! The parsed query can turn out 
 even worse and produce query parts such as:
 title:2~2
 title:and^2.0^10.0  
 Preferred solution: during startup, shut down Solr if there is a nonexistent 
 field alias, just as is done when cycle detection finds a cycle.
 Acceptable solution: ignore the nonexistent field entirely.
 Thomas Egense
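The preferred solution above (fail fast at startup) could be sketched roughly as follows. This is a hypothetical illustration with invented names and data structures, not the actual edismax code:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class Main {
    // Hypothetical sketch: validate field aliases at startup and fail fast
    // when an alias maps to a target that is neither a schema field nor
    // itself an alias. Not the actual Solr implementation.
    static void validateAliases(Map<String, List<String>> aliases, Set<String> schemaFields) {
        for (Map.Entry<String, List<String>> e : aliases.entrySet()) {
            for (String target : e.getValue()) {
                if (!schemaFields.contains(target) && !aliases.containsKey(target)) {
                    throw new IllegalStateException(
                        "alias '" + e.getKey() + "' maps to unknown field '" + target + "'");
                }
            }
        }
    }

    public static void main(String[] args) {
        // Mirrors the report: name_features -> name, features, XXX (XXX unknown)
        Map<String, List<String>> aliases =
            Map.of("name_features", List.of("name", "features", "XXX"));
        try {
            validateAliases(aliases, Set.of("name", "features"));
        } catch (IllegalStateException ex) {
            System.out.println(ex.getMessage());
        }
    }
}
```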






[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-18 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6351:
---
Attachment: SOLR-6351.patch

Fixed TestCloudPivotFacet. The previous random test failures were caused by the 
randomly generated facet.limit, facet.offset, facet.overrequest.count, and 
facet.overrequest.ratio parameters,
which led to stats inconsistent with the pivot stats. Added cleanup of those 
parameters before the stats-on-pivots test. All tests are passing.

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer






[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-12 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6351:
---
Attachment: SOLR-6351.patch

During work on org.apache.solr.handler.component.DistributedFacetPivotLargeTest, found:
{code}
junit.framework.AssertionFailedError: 
.facet_counts.facet_pivot.place_s,company_t[0].stats!=pivot (unordered or 
missing)
at 
__randomizedtesting.SeedInfo.seed([705F7E1C2B9679AA:F1B9F0045CC91996]:0)
at 
org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:842)
at 
org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:861)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:562)
{code}
This means a difference in named-list order between the control shard and a random shard.
Found the reason in 
org.apache.solr.handler.component.PivotFacetValue#convertToNamedList.

For the same reason, also updated DistributedFacetPivotSmallTest to call 
org.apache.solr.BaseDistributedSearchTestCase#query(org.apache.solr.common.params.SolrParams)
 with response comparison.

Test org.apache.solr.handler.component.DistributedFacetPivotLongTailTest worked 
only on string fields; added an int field to compute stats on.

org.apache.solr.cloud.TestCloudPivotFacet: added buildRandomPivotStatsFields to 
build the stats.field list; this method skips fields of string and boolean type 
since they are not supported. 
Added random generation of tag strings on the generated stats fields; if stats 
are active, the tags are also added to the pivot fields.
 
Added handling for absent control stats (count=0) in TestCloudPivotFacet.
Skipping stats when count=0 is not really good, because we lose the 
missing-stats distribution: the case with count=0 but missing=1 is real.
There are still some TestCloudPivotFacet failures for date and double 
(precision?). Will check it tomorrow.

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer






[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-05 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6351:
---
Attachment: SOLR-6351.patch

1. Added distribution of stats fields over pivots to DistributedFacetPivotSmallTest.
2. Fixed a "LinkedHashMap cannot be cast to NamedList" exception occurring on 
stats distribution (changed 
org.apache.solr.handler.component.PivotFacetHelper#convertStatsValuesToNamedList).

3. About testing of unsupported types: I think we need to cover/document the 
limitations as well. I'm not happy with asserting only on the error message, so 
added an HTTP 400 assertion and an error-message substring assertion.

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer






[jira] [Comment Edited] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-05 Thread Vitaliy Zhovtyuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14159648#comment-14159648
 ] 

Vitaliy Zhovtyuk edited comment on SOLR-6351 at 10/5/14 7:20 PM:
-

1. Added distribution of stats fields over pivots to DistributedFacetPivotSmallTest 
(same data and values as in 
org.apache.solr.handler.component.FacetPivotSmallTest)
2. Fixed a "LinkedHashMap cannot be cast to NamedList" exception occurring on 
stats distribution (changed 
org.apache.solr.handler.component.PivotFacetHelper#convertStatsValuesToNamedList).

3. About testing of unsupported types: I think we need to cover/document the 
limitations as well. I'm not happy with asserting only on the error message, so 
added an HTTP 400 assertion and an error-message substring assertion.


was (Author: vzhovtiuk):
1. Added distribution of stats fields over pivots to DistributedFacetPivotSmallTest.
2. Fixed a "LinkedHashMap cannot be cast to NamedList" exception occurring on 
stats distribution (changed 
org.apache.solr.handler.component.PivotFacetHelper#convertStatsValuesToNamedList).

3. About testing of unsupported types: I think we need to cover/document the 
limitations as well. I'm not happy with asserting only on the error message, so 
added an HTTP 400 assertion and an error-message substring assertion.

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer






[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-02 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6351:
---
Attachment: SOLR-6351.patch

Combined with previous patch.
1. Added more solrj tests for stats on pivots
2. Fixed stats result
3. Minor tweaks

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer






[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-09-28 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6351:
---
Attachment: SOLR-6351.patch

Intermediate results:

1. Added a pivot facet test to SolrExampleTests; extended 
org.apache.solr.client.solrj.SolrQuery to support multiple stats.field parameters
2. Added FacetPivotSmallTest and moved all asserts from 
DistributedFacetPivotSmallTest to XPath assertions, splitting it into a few test 
methods
3. Added parsing of the tag local parameter
4. Added org.apache.solr.handler.component.StatsInfo#tagToStatsFields and 
org.apache.solr.handler.component.StatsInfo#getStatsFieldsByTag 
to look up the list of stats fields by tag
5. Modified PivotFacetProcessor to collect and attach StatsValues for every pivot 
field; added a test asserting the stats values of pivots
6. Updated PivotField and org.apache.solr.client.solrj.response.QueryResponse 
to read stats values on pivots
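The tag lookup described in points 3-4 can be illustrated with a simplified, hypothetical sketch. The map name echoes the patch, but the string-based registry and regex extraction here are invented for illustration only:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Main {
    // Hypothetical simplification of the tag -> stats.field association:
    // each stats.field spec carrying a {!tag=...} local param is registered
    // under its tag, so pivot processing can look up the stats fields later.
    static final Map<String, List<String>> tagToStatsFields = new HashMap<>();
    static final Pattern TAG = Pattern.compile("tag=(\\w+)");

    static void registerStatsField(String statsFieldSpec) {
        Matcher m = TAG.matcher(statsFieldSpec);
        if (m.find()) {
            tagToStatsFields
                .computeIfAbsent(m.group(1), k -> new ArrayList<>())
                .add(statsFieldSpec);
        }
    }

    static List<String> getStatsFieldsByTag(String tag) {
        return tagToStatsFields.getOrDefault(tag, List.of());
    }

    public static void main(String[] args) {
        registerStatsField("{!key=avg_price tag=s1 mean=true}price");
        registerStatsField("{!tag=s1 min=true max=true}user_rating");
        System.out.println(getStatsFieldsByTag("s1").size()); // prints 2
    }
}
```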

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer






[jira] [Updated] (SOLR-1632) Distributed IDF

2014-09-28 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-1632:
---
Attachment: (was: SOLR-5488.patch)

 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
Assignee: Mark Miller
 Fix For: 4.9, Trunk

 Attachments: 3x_SOLR-1632_doesntwork.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, distrib-2.patch, 
 distrib.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.






[jira] [Updated] (SOLR-1632) Distributed IDF

2014-09-28 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-1632:
---
Attachment: SOLR-1632.patch

The wrong patch was attached on 01.04.2014.
Updated the previous changes to current trunk.
TestDefaultStatsCache, TestExactSharedStatsCache, TestExactStatsCache, and 
TestLRUStatsCache are passing.

 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
Assignee: Mark Miller
 Fix For: 4.9, Trunk

 Attachments: 3x_SOLR-1632_doesntwork.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 distrib-2.patch, distrib.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.






[jira] [Updated] (SOLR-6028) SOLR returns 500 error code for query /,/

2014-09-22 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6028:
---
Attachment: SOLR-6028.patch

Added a catch and a transformation to SolrException (BadRequest), which leads to 
HTTP 400. It does not look very nice that 
org.apache.lucene.search.RegexpQuery throws IllegalArgumentException on parse 
problems; shouldn't it be a custom runtime exception?
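The transformation could look roughly like the following sketch. The class names are hypothetical, and java.util.regex stands in for Lucene's RegexpQuery (which, per the comment above, likewise throws IllegalArgumentException on a malformed pattern):

```java
import java.util.regex.Pattern;

public class Main {
    // Hypothetical stand-in for SolrException(ErrorCode.BAD_REQUEST, ...):
    // carries an HTTP status so the parse error surfaces as 400, not 500.
    static class BadRequestException extends RuntimeException {
        final int code = 400;
        BadRequestException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Compile the regex body of a /.../ query term, translating the parser's
    // IllegalArgumentException into a client error.
    static Pattern parseRegexTerm(String body) {
        try {
            return Pattern.compile(body);
        } catch (IllegalArgumentException e) {
            throw new BadRequestException("invalid regular expression: " + body, e);
        }
    }

    public static void main(String[] args) {
        try {
            parseRegexTerm("(");  // unbalanced group: malformed regex
        } catch (BadRequestException e) {
            System.out.println(e.code + " " + e.getMessage());
        }
    }
}
```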

 SOLR returns 500 error code for query /,/
 --

 Key: SOLR-6028
 URL: https://issues.apache.org/jira/browse/SOLR-6028
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.7.1
Reporter: Kingston Duffie
Priority: Minor
 Attachments: SOLR-6028.patch


 If you enter the following query string into the SOLR admin console to 
 execute a query, you will get a 500 error:
 /,/
 This is an invalid query -- in the sense that the field between the slashes 
 is not a valid regex.  Nevertheless, I would have expected to get a 400 error 
 rather than 500.






[jira] [Updated] (SOLR-6009) edismax mis-parsing RegexpQuery

2014-09-21 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6009:
---
Attachment: SOLR-6009.patch

Actually there are 2 linked issues:
1. edismax did not support regex queries
2. since regex queries were not supported, RegexpQuery was created by 
org.apache.solr.parser.SolrQueryParserBase#getRegexpQuery without taking into 
account aliasing and 
org.apache.solr.search.ExtendedDismaxQParser#IMPOSSIBLE_FIELD_NAME

The attached patch provides support for regex queries and fixes the leak of the 
impossible field name. Also added tests covering a defined field and an undefined 
field (matched by the '*' dynamic field), plus the debugQuery output.

 edismax mis-parsing RegexpQuery
 ---

 Key: SOLR-6009
 URL: https://issues.apache.org/jira/browse/SOLR-6009
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.7.2
Reporter: Evan Sayer
 Attachments: SOLR-6009.patch


 edismax appears to be leaking its IMPOSSIBLE_FIELD_NAME into queries 
 involving a RegexpQuery.  Steps to reproduce on 4.7.2:
 1) remove the explicit field definition for 'text'
 2) add a catch-all '*' dynamic field of type text_general
 {code}
 <dynamicField name="*" type="text_general" multiValued="true" indexed="true" 
 stored="true" />
 {code}
 3) index the exampledocs/ data
 4) run a query like the following:
 {code}
 http://localhost:8983/solr/collection1/select?q={!edismax%20qf=%27text%27}%20/.*elec.*/debugQuery=true
 {code}
 The debugQuery output will look like this:
 {code}
 <lst name="debug">
 <str name="rawquerystring">{!edismax qf='text'} /.*elec.*/</str>
 <str name="querystring">{!edismax qf='text'} /.*elec.*/</str>
 <str name="parsedquery">(+RegexpQuery(:/.*elec.*/))/no_coord</str>
 <str name="parsedquery_toString">+:/.*elec.*/</str>
 </lst>
 {code}
 If you copy/paste the parsed-query into a text editor or something, you can 
 see that the field-name isn't actually blank.  The IMPOSSIBLE_FIELD_NAME ends 
 up in there.
 I haven't been able to reproduce this behavior on 4.7.2 without getting rid 
 of the explicit field definition for 'text' and using a dynamicField, which 
 is how things are setup on the machine where this issue was discovered.  The 
 query isn't quite right with the explicit field definition in place either, 
 though:
 {code}
 <lst name="debug">
 <str name="rawquerystring">{!edismax qf='text'} /.*elec.*/</str>
 <str name="querystring">{!edismax qf='text'} /.*elec.*/</str>
 <str name="parsedquery">(+DisjunctionMaxQuery((text:elec)))/no_coord</str>
 <str name="parsedquery_toString">+(text:elec)</str>
 </lst>
 {code}
 numFound=0 for both of these.  This site is useful for looking at the 
 characters in the first variant:
 http://rishida.net/tools/conversion/






[jira] [Updated] (SOLR-5992) add removeregex as an atomic update operation

2014-09-21 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5992:
---
Attachment: SOLR-5992.patch

Added the removeregex atomic operation.
It works on a single pattern or a list of patterns.
Regex patterns are compiled before the matching loop executes.
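The semantics can be sketched as follows. This is an illustration of the idea only, not the patch code itself; the method name and use of plain strings are assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class Main {
    // Hypothetical sketch of removeregex semantics: drop the entries of a
    // multi-valued field that match any of the given patterns. Patterns are
    // compiled once, before the matching loop, as the patch describes.
    static List<String> removeRegex(List<String> values, List<String> regexes) {
        List<Pattern> patterns = new ArrayList<>();
        for (String r : regexes) {
            patterns.add(Pattern.compile(r));
        }
        List<String> kept = new ArrayList<>();
        for (String v : values) {
            boolean matched = false;
            for (Pattern p : patterns) {
                if (p.matcher(v).matches()) { matched = true; break; }
            }
            if (!matched) kept.add(v);
        }
        return kept;
    }

    public static void main(String[] args) {
        System.out.println(removeRegex(List.of("aaa", "abc", "bbb"), List.of("a.*")));
    }
}
```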

 add removeregex as an atomic update operation
 ---

 Key: SOLR-5992
 URL: https://issues.apache.org/jira/browse/SOLR-5992
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.9, Trunk
Reporter: Erick Erickson
 Attachments: SOLR-5992.patch


 Spinoff from SOLR-3862. If someone wants to pick it up, please do, create a 
 patch and assign to me. See the discussion at SOLR-3862 for some things to 
 bear in mind, especially the interface discussion.






[jira] [Created] (SOLR-6541) Enhancement for SOLR-6452 StatsComponent missing stat won't work with docValues=true and indexed=false

2014-09-19 Thread Vitaliy Zhovtyuk (JIRA)
Vitaliy Zhovtyuk created SOLR-6541:
--

 Summary: Enhancement for SOLR-6452 StatsComponent missing stat 
won't work with docValues=true and indexed=false
 Key: SOLR-6541
 URL: https://issues.apache.org/jira/browse/SOLR-6541
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.10, 6.0
Reporter: Vitaliy Zhovtyuk
Priority: Minor
 Fix For: 5.0, 6.0


This issue is a refactoring of the solution provided in SOLR-6452, "StatsComponent 
missing stat won't work with docValues=true and indexed=false".
I think the following points need to be addressed:
1. Accumulate methods should not return stats-specific numbers (they are generic). 
Attached a solution with a container class, and also made the methods private.
Returning just the missing count from the accumulate methods does not allow 
extending them with additional count fields, so I propose leaving them void.
2. Reduced the visibility of fields in FieldFacetStats.
3. The methods FieldFacetStats#accumulateMissing and 
FieldFacetStats#accumulateTermNum do not throw any IOException.
4. We don't need intermediate maps to accumulate missing counts. The method 
org.apache.solr.handler.component.FieldFacetStats#facetMissingNum 
can be changed to work directly on the StatsValues structure, and 
org.apache.solr.handler.component.FieldFacetStats#accumulateMissing removed; 
we don't need two phases.
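Point 1, a container accumulated in place instead of a numeric return value, can be sketched as below. All names are hypothetical; this illustrates the shape of the idea, not the actual FieldFacetStats code:

```java
public class Main {
    // Hypothetical container accumulated in place: accumulate() stays void,
    // and new counters can be added here without changing method signatures.
    static class AccumulationResult {
        long valued;   // documents that had a value for the field
        long missing;  // documents with no value for the field
    }

    // Illustrative accumulate over per-document values, where null stands in
    // for "document has no value for this field".
    static void accumulate(Long[] docValues, AccumulationResult acc) {
        for (Long v : docValues) {
            if (v == null) acc.missing++; else acc.valued++;
        }
    }

    public static void main(String[] args) {
        AccumulationResult acc = new AccumulationResult();
        accumulate(new Long[]{1L, null, 2L}, acc);
        System.out.println(acc.valued + " valued, " + acc.missing + " missing");
    }
}
```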







[jira] [Updated] (SOLR-6541) Enhancement for SOLR-6452 StatsComponent missing stat won't work with docValues=true and indexed=false

2014-09-19 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6541:
---
Attachment: SOLR-6541.patch

Patch based on trunk to address mentioned issues.

 Enhancement for SOLR-6452 StatsComponent missing stat won't work with 
 docValues=true and indexed=false
 

 Key: SOLR-6541
 URL: https://issues.apache.org/jira/browse/SOLR-6541
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.10, 6.0
Reporter: Vitaliy Zhovtyuk
Priority: Minor
 Fix For: 5.0, 6.0

 Attachments: SOLR-6541.patch


 This issue is a refactoring of the solution provided in SOLR-6452, "StatsComponent 
 missing stat won't work with docValues=true and indexed=false".
 I think the following points need to be addressed:
 1. Accumulate methods should not return stats-specific numbers (they are 
 generic). Attached a solution with a container class, and also made the methods 
 private.
 Returning just the missing count from the accumulate methods does not allow 
 extending them with additional count fields, so I propose leaving them void.
 2. Reduced the visibility of fields in FieldFacetStats.
 3. The methods FieldFacetStats#accumulateMissing and 
 FieldFacetStats#accumulateTermNum do not throw any IOException.
 4. We don't need intermediate maps to accumulate missing counts. The method 
 org.apache.solr.handler.component.FieldFacetStats#facetMissingNum 
 can be changed to work directly on the StatsValues structure, and 
 org.apache.solr.handler.component.FieldFacetStats#accumulateMissing removed; 
 we don't need two phases.






[jira] [Updated] (SOLR-6452) StatsComponent missing stat won't work with docValues=true and indexed=false

2014-09-13 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6452:
---
Attachment: SOLR-6452-trunk.patch

Added a patch based on trunk.
Added facet field tests for integer and double types; facet value stats are 
not available. Does that require a new issue, or should SOLR-6452 be reopened?

The implementation of this issue can be improved in a few places:
1. Accumulate methods should not return stats-specific numbers (the 
accumulation is generic). The attached solution uses a container class and 
makes the methods private.
Returning just the missing counts from the accumulate methods does not allow 
extending them with additional count fields, so I propose making them void.

2. Reduced the visibility of fields in FieldFacetStats.
Created a getter to expose FieldFacetStats.facetStatsValues.

3. The methods FieldFacetStats#accumulateMissing and 
FieldFacetStats#accumulateTermNum do not actually throw any IOException.

4. We don't need intermediate maps to accumulate missing counts. Changed 
org.apache.solr.handler.component.FieldFacetStats#facetMissingNum 
to work directly on the StatsValues structure and removed 
org.apache.solr.handler.component.FieldFacetStats#accumulateMissing. 
There is no need to do it in two phases.

 StatsComponent missing stat won't work with docValues=true and indexed=false
 --

 Key: SOLR-6452
 URL: https://issues.apache.org/jira/browse/SOLR-6452
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10, 5.0
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
 Fix For: 4.11, 5.0

 Attachments: SOLR-6452-trunk.patch, SOLR-6452-trunk.patch, 
 SOLR-6452-trunk.patch, SOLR-6452.patch, SOLR-6452.patch, SOLR-6452.patch


 StatsComponent can work with DocValues, but it still required to use 
 indexed=true for the missing stat to work. Missing values should be 
 obtained from the docValues too.






[jira] [Updated] (SOLR-6452) StatsComponent missing stat won't work with docValues=true and indexed=false

2014-09-10 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6452:
---
Attachment: SOLR-6452.patch

There are a few issues in the committed code, addressed in the attached patch:
1. Accumulate methods should not return stats-specific numbers (the 
calculation is generic).
The attached solution uses a nested container class.
Also made the methods private.

2. Reduced the visibility of fields in FieldFacetStats.
Created a getter to expose FieldFacetStats.facetStatsValues.

3. The methods FieldFacetStats#accumulateMissing and 
FieldFacetStats#accumulateTermNum do not actually throw any IOException.

4. Why can't the missing facet counters work on StatsValues directly, without 
intermediate maps? The required methods look duplicated between 
org.apache.solr.handler.component.FieldFacetStats#facetMissingNum 
and org.apache.solr.handler.component.AbstractStatsValues#missing.
I will try to unify them.

 StatsComponent missing stat won't work with docValues=true and indexed=false
 --

 Key: SOLR-6452
 URL: https://issues.apache.org/jira/browse/SOLR-6452
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10, 5.0
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
 Attachments: SOLR-6452-trunk.patch, SOLR-6452-trunk.patch, 
 SOLR-6452.patch, SOLR-6452.patch, SOLR-6452.patch


 StatsComponent can work with DocValues, but it still required to use 
 indexed=true for the missing stat to work. Missing values should be 
 obtained from the docValues too.






[jira] [Updated] (SOLR-6024) StatsComponent does not work for docValues enabled multiValued fields

2014-08-31 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6024:
---
Attachment: SOLR-6024-trunk.patch

Patch based on the latest trunk patch.
Added stats calculation tests for docValues multiValued fields of float and 
integer numeric types, added distinct-count calculation, and added a 
stats.facet query on a docValues field (which leads to a field type exception).

 StatsComponent does not work for docValues enabled multiValued fields
 -

 Key: SOLR-6024
 URL: https://issues.apache.org/jira/browse/SOLR-6024
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.8
 Environment: java version 1.7.0_45
 Mac OS X Version 10.7.5
Reporter: Ahmet Arslan
  Labels: StatsComponent, docValues, multiValued
 Fix For: 4.9

 Attachments: SOLR-6024-trunk.patch, SOLR-6024-trunk.patch, 
 SOLR-6024-trunk.patch, SOLR-6024-trunk.patch, SOLR-6024-trunk.patch, 
 SOLR-6024.patch, SOLR-6024.patch


 Harish Agarwal reported this in solr user mailing list : 
 http://search-lucene.com/m/QTPaoTJXV1
 It is easy to reproduce with the default example Solr setup. The following 
 fields are added to the example schema.xml, and the exampledocs are indexed.
 {code:xml}
 <field name="cat" type="string" indexed="true" stored="true" 
 docValues="true" multiValued="true"/>
 <field name="popularity" type="int" indexed="true" stored="false" 
 docValues="true" multiValued="true"/>
 {code}
 When {{docValues=true}} *and* {{multiValued=true}} are used at the same 
 time, StatsComponent throws :
 {noformat}
 ERROR org.apache.solr.core.SolrCore  – org.apache.solr.common.SolrException: 
 Type mismatch: popularity was indexed as SORTED_SET
   at 
 org.apache.solr.request.UnInvertedField.init(UnInvertedField.java:193)
   at 
 org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:699)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:319)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsCounts(StatsComponent.java:290)
   at 
 org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:78)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:221)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1964)
 {noformat}






[jira] [Updated] (SOLR-6054) Log progress of transaction log replays

2014-08-27 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6054:
---

Attachment: SOLR-6054.patch

Added progress logging every 10 seconds, plus log statements to expose the 
transaction log's internal state.
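A minimal, Solr-independent sketch of what checkpoint-based progress logging for a replay loop can look like; the interval, counts, and message format here are assumptions for illustration, not the patch's actual output:

```java
// Sketch: log the total size up front, then report progress at most
// once per interval while replaying, and always at the end.
public class ReplayProgressSketch {
    static final long INTERVAL_MS = 10_000; // report at most every 10 seconds

    public static void main(String[] args) {
        long totalEntries = 1_000_000;
        long lastLog = System.currentTimeMillis();
        System.out.println("Starting replay, log size: " + totalEntries + " entries");
        for (long done = 1; done <= totalEntries; done++) {
            // ... apply transaction log entry `done` here ...
            long now = System.currentTimeMillis();
            if (now - lastLog >= INTERVAL_MS || done == totalEntries) {
                System.out.printf("Replayed %d/%d entries (%.1f%%)%n",
                        done, totalEntries, 100.0 * done / totalEntries);
                lastLog = now;
            }
        }
    }
}
```

The time-based checkpoint keeps log volume bounded regardless of how many entries the transaction log contains.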

 Log progress of transaction log replays
 ---

 Key: SOLR-6054
 URL: https://issues.apache.org/jira/browse/SOLR-6054
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-6054.patch


 There is zero logging of how a transaction log replay is progressing. We 
 should add some simple checkpoint based progress information. Logging the 
 size of the log file at the beginning would also be useful.






[jira] [Commented] (SOLR-6191) Self Describing SearchComponents, RequestHandlers, params. etc.

2014-08-27 Thread Vitaliy Zhovtyuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14112984#comment-14112984
 ] 

Vitaliy Zhovtyuk commented on SOLR-6191:


About type safety: maybe that's the wrong term, but in my opinion 
{code}SolrParams.get(MoreLikeThisParameters.MLT){code} is safer than the 
{code}SolrParams.get("mlt"){code} used throughout the code. The 
MoreLikeThisParameters enum also documents the parameters and is not as 
error-prone as hardcoded strings; components and handlers should refer to 
their parameters the same way.
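The enum-vs-string-literal argument can be made concrete with a toy example. None of the types below are the real Solr classes (MoreLikeThisParams2 and ParamEnumSketch are made up); the point is that a typo in the enum constant is a compile error, while a typo in the string is a silent lookup miss:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical parameter enum: carries the wire name and a description.
enum MoreLikeThisParams2 {
    MLT("mlt", "Enable MoreLikeThis processing");

    final String wireName;
    final String description;
    MoreLikeThisParams2(String wireName, String description) {
        this.wireName = wireName;
        this.description = description;
    }
}

public class ParamEnumSketch {
    // Stand-in for a SolrParams-like string map.
    static final Map<String, String> solrParams = new HashMap<>();

    // Enum-based access: checked at compile time, self-documenting.
    static String get(MoreLikeThisParams2 p) { return solrParams.get(p.wireName); }

    public static void main(String[] args) {
        solrParams.put("mlt", "true");
        System.out.println(get(MoreLikeThisParams2.MLT)); // true
        // String-based access: nothing stops a typo like "mtl".
        System.out.println(solrParams.get("mlt"));        // true
    }
}
```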

 Self Describing SearchComponents, RequestHandlers, params. etc.
 ---

 Key: SOLR-6191
 URL: https://issues.apache.org/jira/browse/SOLR-6191
 Project: Solr
  Issue Type: Bug
Reporter: Vitaliy Zhovtyuk
Assignee: Noble Paul
  Labels: features
 Attachments: SOLR-6191.patch, SOLR-6191.patch, SOLR-6191.patch, 
 SOLR-6191.patch, SOLR-6191.patch, SOLR-6191.patch


 We should have self describing parameters for search components, etc.
 I think we should support UNIX style short and long names and that you should 
 also be able to get a short description of what a parameter does if you ask 
 for INFO on it.
 For instance, fl could also be fieldList, etc.
 Also, we should put this into the base classes so that new components can add 
 to it.






[jira] [Updated] (SOLR-6024) StatsComponent does not work for docValues enabled multiValued fields

2014-08-24 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6024:
---

Attachment: SOLR-6024.patch

Added a patch based on the lucene_solr_4_9 branch fixing the issue: for 
fields with docValues it calls 
org.apache.solr.request.DocValuesStats#getCounts (introduced in rev. 1595259), 
and UnInvertedField in other cases.

 StatsComponent does not work for docValues enabled multiValued fields
 -

 Key: SOLR-6024
 URL: https://issues.apache.org/jira/browse/SOLR-6024
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.8
 Environment: java version 1.7.0_45
 Mac OS X Version 10.7.5
Reporter: Ahmet Arslan
  Labels: StatsComponent, docValues, multiValued
 Fix For: 4.9

 Attachments: SOLR-6024.patch, SOLR-6024.patch


 Harish Agarwal reported this in solr user mailing list : 
 http://search-lucene.com/m/QTPaoTJXV1
 It is easy to reproduce with the default example Solr setup. The following 
 fields are added to the example schema.xml, and the exampledocs are indexed.
 {code:xml}
 <field name="cat" type="string" indexed="true" stored="true" 
 docValues="true" multiValued="true"/>
 <field name="popularity" type="int" indexed="true" stored="false" 
 docValues="true" multiValued="true"/>
 {code}
 When {{docValues=true}} *and* {{multiValued=true}} are used at the same 
 time, StatsComponent throws :
 {noformat}
 ERROR org.apache.solr.core.SolrCore  – org.apache.solr.common.SolrException: 
 Type mismatch: popularity was indexed as SORTED_SET
   at 
 org.apache.solr.request.UnInvertedField.init(UnInvertedField.java:193)
   at 
 org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:699)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:319)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsCounts(StatsComponent.java:290)
   at 
 org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:78)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:221)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1964)
 {noformat}






[jira] [Updated] (SOLR-6024) StatsComponent does not work for docValues enabled multiValued fields

2014-08-17 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6024:
---

Attachment: SOLR-6024.patch

Issue reproduced in solr-4.9.0.
Issue not reproduced on trunk; the test attached in the patch does not get an 
exception on the query.
The UnInvertedField invocation was removed in rev. 1595259 (5/16/14, rmuir; 
1594441, 1593789; LUCENE-5666: Add UninvertingReader).


 StatsComponent does not work for docValues enabled multiValued fields
 -

 Key: SOLR-6024
 URL: https://issues.apache.org/jira/browse/SOLR-6024
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.8
 Environment: java version 1.7.0_45
 Mac OS X Version 10.7.5
Reporter: Ahmet Arslan
  Labels: StatsComponent, docValues, multiValued
 Fix For: 4.9

 Attachments: SOLR-6024.patch


 Harish Agarwal reported this in solr user mailing list : 
 http://search-lucene.com/m/QTPaoTJXV1
 It is easy to reproduce with the default example Solr setup. The following 
 fields are added to the example schema.xml, and the exampledocs are indexed.
 {code:xml}
 <field name="cat" type="string" indexed="true" stored="true" 
 docValues="true" multiValued="true"/>
 <field name="popularity" type="int" indexed="true" stored="false" 
 docValues="true" multiValued="true"/>
 {code}
 When {{docValues=true}} *and* {{multiValued=true}} are used at the same 
 time, StatsComponent throws :
 {noformat}
 ERROR org.apache.solr.core.SolrCore  – org.apache.solr.common.SolrException: 
 Type mismatch: popularity was indexed as SORTED_SET
   at 
 org.apache.solr.request.UnInvertedField.init(UnInvertedField.java:193)
   at 
 org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:699)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:319)
   at 
 org.apache.solr.handler.component.SimpleStats.getStatsCounts(StatsComponent.java:290)
   at 
 org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:78)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:221)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1964)
 {noformat}






[jira] [Commented] (SOLR-6191) Self Describing SearchComponents, RequestHandlers, params. etc.

2014-08-17 Thread Vitaliy Zhovtyuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14100051#comment-14100051
 ] 

Vitaliy Zhovtyuk commented on SOLR-6191:


The annotation approach is elegant, but it does not describe the parameter 
names used in the code, since those are just string constants. The original 
goal was to replace all string parameter names with a strongly typed enum that 
carries the parameter's description; see 
org.apache.solr.common.params.SolrParams. It probably makes sense to combine 
both approaches: use a parameter enum implementing 
org.apache.solr.common.params.ParameterDescription as the @Param value, and 
not require the component to implement an interface.

This will bring strong typing to code usages and provide a generic API for 
describing parameters.

 Self Describing SearchComponents, RequestHandlers, params. etc.
 ---

 Key: SOLR-6191
 URL: https://issues.apache.org/jira/browse/SOLR-6191
 Project: Solr
  Issue Type: Bug
Reporter: Vitaliy Zhovtyuk
Assignee: Noble Paul
  Labels: features
 Attachments: SOLR-6191.patch, SOLR-6191.patch, SOLR-6191.patch, 
 SOLR-6191.patch


 We should have self describing parameters for search components, etc.
 I think we should support UNIX style short and long names and that you should 
 also be able to get a short description of what a parameter does if you ask 
 for INFO on it.
 For instance, fl could also be fieldList, etc.
 Also, we should put this into the base classes so that new components can add 
 to it.






[jira] [Commented] (SOLR-6016) Failure indexing exampledocs with example-schemaless mode

2014-08-17 Thread Vitaliy Zhovtyuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14100056#comment-14100056
 ] 

Vitaliy Zhovtyuk commented on SOLR-6016:


The solrconfig.xml file does not contain whitespace changes; the order of the 
typeMapping elements was changed intentionally to favor Double over Integer, 
so any tests with random values will pass with this configuration (since all 
numeric values will be typed as double).
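For reference, the ordering in question is the sequence of typeMapping entries under AddSchemaFieldsUpdateProcessorFactory in the schemaless solrconfig.xml. A sketch of the shape (field type names are illustrative and may differ from the shipped config):

```xml
<processor class="solr.AddSchemaFieldsUpdateProcessorFactory">
  <str name="defaultFieldType">text_general</str>
  <!-- Listed first, so numeric values resolve to a double-typed field -->
  <lst name="typeMapping">
    <str name="valueClass">java.lang.Double</str>
    <str name="fieldType">tdouble</str>
  </lst>
  <lst name="typeMapping">
    <str name="valueClass">java.lang.Long</str>
    <str name="fieldType">tlong</str>
  </lst>
</processor>
```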

 Failure indexing exampledocs with example-schemaless mode
 -

 Key: SOLR-6016
 URL: https://issues.apache.org/jira/browse/SOLR-6016
 Project: Solr
  Issue Type: Bug
  Components: documentation, Schema and Analysis
Affects Versions: 4.7.2, 4.8
Reporter: Shalin Shekhar Mangar
 Attachments: SOLR-6016.patch, SOLR-6016.patch, solr.log


 Steps to reproduce:
 # cd example; java -Dsolr.solr.home=example-schemaless/solr -jar start.jar
 # cd exampledocs; java -jar post.jar *.xml
 Output from post.jar
 {code}
 Posting files to base url http://localhost:8983/solr/update using 
 content-type application/xml..
 POSTing file gb18030-example.xml
 POSTing file hd.xml
 POSTing file ipod_other.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file ipod_video.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file manufacturers.xml
 POSTing file mem.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file money.xml
 POSTing file monitor2.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file monitor.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file mp500.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file sd500.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file solr.xml
 POSTing file utf8-example.xml
 POSTing file vidcard.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 14 files indexed.
 COMMITting Solr index changes to http://localhost:8983/solr/update..
 Time spent: 0:00:00.401
 {code}
 Exceptions in Solr (I am pasting just one of them):
 {code}
 5105 [qtp697879466-14] ERROR org.apache.solr.core.SolrCore  – 
 org.apache.solr.common.SolrException: ERROR: [doc=EN7800GTX/2DHTV/256M] Error 
 adding field 'price'='479.95' msg=For input string: 479.95
   at 
 org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:167)
   at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:234)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
   at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
   at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
 ..
 Caused by: java.lang.NumberFormatException: For input string: 479.95
   at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
   at java.lang.Long.parseLong(Long.java:441)
   at java.lang.Long.parseLong(Long.java:483)
   at org.apache.solr.schema.TrieField.createField(TrieField.java:609)
   at org.apache.solr.schema.TrieField.createFields(TrieField.java:660)
 {code}
 The full solr.log is attached.
 I understand why these errors occur but since 

[jira] [Updated] (SOLR-3881) frequent OOM in LanguageIdentifierUpdateProcessor

2014-08-05 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-3881:
---

Attachment: SOLR-3881.patch

About moving concatFields() to the tika language identifier: I think the way to 
go is just move the whole method there, then change the detectLanguage() method 
to take the SolrInputDocument instead of a String. You don't need to carry over 
the field[] parameter from concatFields(), since data member inputFields will 
be accessible everywhere it's needed.
[VZ] This call looks much cleaner now. I changed inputFields to private to 
reduce its visibility.

I should have mentioned previously: I don't like the maxAppendSize and 
maxTotalAppendSize names - size is ambiguous (could refer to bytes, chars, 
whatever), and append refers to an internal operation... I'd like to see 
append=field value and size=chars: maxFieldValueChars, and 
maxTotalChars (since appending doesn't need to be mentioned for the global 
limit). The same thing goes for the default constants and the test method names.
[VZ] Renamed parameters and test methods

Some minor issues I found with your patch:
As I said previously: We should also set default maxima for both per-value and 
total chars, rather than MAX_INT, as in the current patch.
The total chars default should be its own setting; I was thinking we could make 
it double the per-value default?
[VZ] Added a default value for maxTotalChars and changed both defaults to 
10K, matching com.cybozu.labs.langdetect.Detector.maxLength.

It's better not to reorder import statements unless you're already making 
significant changes to them; it distracts from the meat of the change. (You 
reordered them in LangDetectLanguageIdentifierUpdateProcessor and 
LanguageIdentifierUpdateProcessorFactoryTestCase)
[VZ] That was the IDE reordering imports alphabetically; restored them to the 
original order.

In LanguageIdentifierUpdateProcessor.concatFields(), when you trim the 
concatenated text to maxTotalAppendSize, I think 
StringBuilder.setLength(maxTotalAppendSize); would be more efficient than 
StringBuilder.delete(maxTotalAppendSize, sb.length() - 1);
[VZ] Yep, cleaned that up.

In addition to the test you added for the global limit, we should also test 
using both the per-value and global limits at the same time.
[VZ] Added tests for both limits.
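The trimming being discussed can be sketched in a self-contained way. The parameter names mirror the ones agreed above (maxFieldValueChars, maxTotalChars), but the code is illustrative, not the actual patch; note the use of StringBuilder.setLength(), which truncates in place without the copying that delete() does:

```java
public class ConcatFieldsSketch {
    // Concatenate field values with a per-value cap and a global cap.
    static String concatFields(String[] values, int maxFieldValueChars, int maxTotalChars) {
        StringBuilder sb = new StringBuilder();
        for (String v : values) {
            // Per-value limit: take at most maxFieldValueChars of each value.
            sb.append(v, 0, Math.min(v.length(), maxFieldValueChars)).append(' ');
            if (sb.length() >= maxTotalChars) {
                sb.setLength(maxTotalChars); // cheap truncation to the global limit
                break;
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String out = concatFields(new String[]{"abcdefgh", "ijklmnop"}, 4, 7);
        System.out.println(out); // "abcd ij"
    }
}
```

With both limits enforced in one pass, the builder can never grow more than one field value past maxTotalChars, which bounds the memory that caused the original OOM.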

 frequent OOM in LanguageIdentifierUpdateProcessor
 -

 Key: SOLR-3881
 URL: https://issues.apache.org/jira/browse/SOLR-3881
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.0
 Environment: CentOS 6.x, JDK 1.6, (java -server -Xms2G -Xmx2G 
 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=)
Reporter: Rob Tulloh
 Fix For: 4.9, 5.0

 Attachments: SOLR-3881.patch, SOLR-3881.patch, SOLR-3881.patch, 
 SOLR-3881.patch


 We are seeing frequent failures from Solr causing it to OOM. Here is the 
 stack trace we observe when this happens:
 {noformat}
 Caused by: java.lang.OutOfMemoryError: Java heap space
 at java.util.Arrays.copyOf(Arrays.java:2882)
 at 
 java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
 at 
 java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
 at java.lang.StringBuffer.append(StringBuffer.java:224)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.concatFields(LanguageIdentifierUpdateProcessor.java:286)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.process(LanguageIdentifierUpdateProcessor.java:189)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:171)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler$2.update(BinaryUpdateRequestHandler.java:90)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:140)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:120)
 at 
 org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:221)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:105)
 at 
 org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:186)
 at 
 org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:112)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:147)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler.parseAndLoadDocs(BinaryUpdateRequestHandler.java:100)
 at 
 

[jira] [Updated] (SOLR-6016) Failure indexing exampledocs with example-schemaless mode

2014-08-03 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6016:
---

Attachment: SOLR-6016.patch

Added a test with a random order of values against the schemaless example 
config and asserted the thrown exception.

 Failure indexing exampledocs with example-schemaless mode
 -

 Key: SOLR-6016
 URL: https://issues.apache.org/jira/browse/SOLR-6016
 Project: Solr
  Issue Type: Bug
  Components: documentation, Schema and Analysis
Affects Versions: 4.7.2, 4.8
Reporter: Shalin Shekhar Mangar
 Attachments: SOLR-6016.patch, SOLR-6016.patch, solr.log


 Steps to reproduce:
 # cd example; java -Dsolr.solr.home=example-schemaless/solr -jar start.jar
 # cd exampledocs; java -jar post.jar *.xml
 Output from post.jar
 {code}
 Posting files to base url http://localhost:8983/solr/update using 
 content-type application/xml..
 POSTing file gb18030-example.xml
 POSTing file hd.xml
 POSTing file ipod_other.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file ipod_video.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file manufacturers.xml
 POSTing file mem.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file money.xml
 POSTing file monitor2.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file monitor.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file mp500.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file sd500.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file solr.xml
 POSTing file utf8-example.xml
 POSTing file vidcard.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 14 files indexed.
 COMMITting Solr index changes to http://localhost:8983/solr/update..
 Time spent: 0:00:00.401
 {code}
 Exceptions in Solr (I am pasting just one of them):
 {code}
 5105 [qtp697879466-14] ERROR org.apache.solr.core.SolrCore  – 
 org.apache.solr.common.SolrException: ERROR: [doc=EN7800GTX/2DHTV/256M] Error 
 adding field 'price'='479.95' msg=For input string: 479.95
   at 
 org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:167)
   at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:234)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
   at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
   at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
 ..
 Caused by: java.lang.NumberFormatException: For input string: 479.95
   at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
   at java.lang.Long.parseLong(Long.java:441)
   at java.lang.Long.parseLong(Long.java:483)
   at org.apache.solr.schema.TrieField.createField(TrieField.java:609)
   at org.apache.solr.schema.TrieField.createFields(TrieField.java:660)
 {code}
 The full solr.log is attached.
 I understand why these errors occur but since we ship example data with Solr 
 to demonstrate our core features, I expect that indexing exampledocs should 
 work without errors.




[jira] [Updated] (SOLR-6163) special chars and ManagedSynonymFilterFactory

2014-07-27 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6163:
---

Attachment: SOLR-6163.patch

Added a change that decodes the managed resource path with decode=true.
Checked usage of org.restlet.data.Reference methods; it is used only in 
org.apache.solr.rest.RestManager.
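The root cause can be seen with plain JDK URL decoding; a minimal sketch (class and method names hypothetical), assuming the key arrives percent-encoded from the HTTP layer:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class DecodeDemo {
    // Decode a percent-encoded path segment back to its UTF-8 form,
    // which is what the decode=true change is meant to achieve.
    static String decodeSegment(String segment) {
        try {
            return URLDecoder.decode(segment, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported", e);
        }
    }

    public static void main(String[] args) {
        // The key the user tried to delete arrives percent-encoded:
        String encoded = "%C3%A9%C3%A9%C3%A9";
        System.out.println(decodeSegment(encoded)); // prints "ééé"
    }
}
```

Without this decoding step, the manager looks up the literal string %C3%A9%C3%A9%C3%A9, which is why the key is reported as not found.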

 special chars and ManagedSynonymFilterFactory
 -

 Key: SOLR-6163
 URL: https://issues.apache.org/jira/browse/SOLR-6163
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Wim Kumpen
 Attachments: SOLR-6163.patch


 Hey,
 I was playing with the ManagedSynonymFilterFactory to create a synonym list 
 with the API. But I have difficulties when my keys contains special 
 characters (or spaces) to delete them...
 I added a key ééé that matches with some other words. It's saved in the 
 synonym file as ééé.
 When I try to delete it, I do:
 curl -X DELETE 
 "http://localhost/solr/mycore/schema/analysis/synonyms/english/ééé"
 error message: %C3%A9%C3%A9%C3%A9%C2%B5 not found in 
 /schema/analysis/synonyms/english
 A wild guess from me is that %C3%A9 isn't decoded back to ééé. And that's why 
 he can't find the keyword?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6248) MoreLikeThis Query Parser

2014-07-24 Thread Vitaliy Zhovtyuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073671#comment-14073671
 ] 

Vitaliy Zhovtyuk commented on SOLR-6248:


With the current implementation in the patch, the mlt qparser can match a 
document by the unique field configured in the schema and find similar 
documents from it. The parser syntax now looks like 
{code}{!mlt id=17 qf=lowerfilt}lowerfilt:*{code} where id is the value of the 
configured unique field (not the id column in the schema) and qf lists the 
fields to match on.

As for passing text, this parser could be extended with a text parameter that 
searches for documents by the given term and then looks for similar documents 
using the existing implementation.

 MoreLikeThis Query Parser
 -

 Key: SOLR-6248
 URL: https://issues.apache.org/jira/browse/SOLR-6248
 Project: Solr
  Issue Type: New Feature
Reporter: Anshum Gupta
 Attachments: SOLR-6248.patch


 MLT Component doesn't let people highlight/paginate and the handler comes 
 with an cost of maintaining another piece in the config. Also, any changes to 
 the default (number of results to be fetched etc.) /select handler need to be 
 copied/synced with this handler too.
 Having an MLT QParser would let users get back docs based on a query for them 
 to paginate, highlight etc. It would also give them the flexibility to use 
 this anywhere i.e. q,fq,bq etc.
 A bit of history about MLT (thanks to Hoss)
 MLT Handler pre-dates the existence of QParsers and was meant to take an 
 arbitrary query as input, find docs that match that 
 query, club them together to find interesting terms, and then use those 
 terms as if they were my main query to generate a main result set.
 This result would then be used as the set to facet, highlight etc.
 The flow: Query - DocList(m) - Bag (terms) - Query - DocList\(y)
 The MLT component on the other hand solved a very different purpose of 
 augmenting the main result set. It is used to get similar docs for each of 
 the doc in the main result set.
 DocSet\(n) - n * Bag (terms) - n * (Query) - n * DocList(m)
 The new approach:
 All of this can be done better and cleaner (and makes more sense too) using 
 an MLT QParser.
 An important thing to handle here is the case where the user doesn't have 
 TermVectors, in which case, it does what happens right now i.e. parsing 
 stored fields.
 Also, in case the user doesn't have a field (to be used for MLT) indexed, the 
 field would need to be a TextField with an index analyzer defined. This 
 analyzer will then be used to extract terms for MLT.
 In case of SolrCloud mode, '/get-termvectors' can be used after looking at 
 the schema (if TermVectors are enabled for the field). If not, a /get call 
 can be used to fetch the field and parse it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6248) MoreLikeThis Query Parser

2014-07-23 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6248:
---

Attachment: SOLR-6248.patch

Added an mlt qparser that works in both single-node and cloud mode, with 
support for numeric ids.
The MLT result is written as the query result, not in a MoreLikeThis section. 
Added tests that exercise single-node and cloud modes.

 MoreLikeThis Query Parser
 -

 Key: SOLR-6248
 URL: https://issues.apache.org/jira/browse/SOLR-6248
 Project: Solr
  Issue Type: New Feature
Reporter: Anshum Gupta
 Attachments: SOLR-6248.patch


 MLT Component doesn't let people highlight/paginate and the handler comes 
 with an cost of maintaining another piece in the config. Also, any changes to 
 the default (number of results to be fetched etc.) /select handler need to be 
 copied/synced with this handler too.
 Having an MLT QParser would let users get back docs based on a query for them 
 to paginate, highlight etc. It would also give them the flexibility to use 
 this anywhere i.e. q,fq,bq etc.
 A bit of history about MLT (thanks to Hoss)
 MLT Handler pre-dates the existence of QParsers and was meant to take an 
 arbitrary query as input, find docs that match that 
 query, club them together to find interesting terms, and then use those 
 terms as if they were my main query to generate a main result set.
 This result would then be used as the set to facet, highlight etc.
 The flow: Query - DocList(m) - Bag (terms) - Query - DocList\(y)
 The MLT component on the other hand solved a very different purpose of 
 augmenting the main result set. It is used to get similar docs for each of 
 the doc in the main result set.
 DocSet\(n) - n * Bag (terms) - n * (Query) - n * DocList(m)
 The new approach:
 All of this can be done better and cleaner (and makes more sense too) using 
 an MLT QParser.
 An important thing to handle here is the case where the user doesn't have 
 TermVectors, in which case, it does what happens right now i.e. parsing 
 stored fields.
 Also, in case the user doesn't have a field (to be used for MLT) indexed, the 
 field would need to be a TextField with an index analyzer defined. This 
 analyzer will then be used to extract terms for MLT.
 In case of SolrCloud mode, '/get-termvectors' can be used after looking at 
 the schema (if TermVectors are enabled for the field). If not, a /get call 
 can be used to fetch the field and parse it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3881) frequent OOM in LanguageIdentifierUpdateProcessor

2014-07-22 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-3881:
---

Attachment: SOLR-3881.patch

Added global limit to concatenated string
Added limit to detector to detector.setMaxTextLength(maxTotalAppendSize);

About moving 
org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor#concatFields 
to org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessor it's 
a bit unclear because concatFields is used in both 
org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessor and 
org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessor
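A minimal sketch of the global cap on the concatenated string (class, method and parameter names hypothetical; the actual patch applies the limit inside concatFields):

```java
public class ConcatCap {
    // Append a field value to sb only up to the global size cap,
    // truncating the value if it would exceed maxTotalAppendSize.
    // Once the cap is reached, further appends become no-ops, so the
    // buffer can never grow past the limit and trigger an OOM.
    static void appendCapped(StringBuilder sb, String value, int maxTotalAppendSize) {
        int remaining = maxTotalAppendSize - sb.length();
        if (remaining <= 0) {
            return; // cap already reached, drop the rest of the input
        }
        sb.append(value, 0, Math.min(value.length(), remaining));
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        appendCapped(sb, "hello", 3); // truncated to "hel"
        appendCapped(sb, "world", 3); // dropped entirely, cap reached
        System.out.println(sb); // prints "hel"
    }
}
```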

 frequent OOM in LanguageIdentifierUpdateProcessor
 -

 Key: SOLR-3881
 URL: https://issues.apache.org/jira/browse/SOLR-3881
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.0
 Environment: CentOS 6.x, JDK 1.6, (java -server -Xms2G -Xmx2G 
 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=)
Reporter: Rob Tulloh
 Fix For: 4.9, 5.0

 Attachments: SOLR-3881.patch, SOLR-3881.patch, SOLR-3881.patch


 We are seeing frequent failures from Solr causing it to OOM. Here is the 
 stack trace we observe when this happens:
 {noformat}
 Caused by: java.lang.OutOfMemoryError: Java heap space
 at java.util.Arrays.copyOf(Arrays.java:2882)
 at 
 java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
 at 
 java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
 at java.lang.StringBuffer.append(StringBuffer.java:224)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.concatFields(LanguageIdentifierUpdateProcessor.java:286)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.process(LanguageIdentifierUpdateProcessor.java:189)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:171)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler$2.update(BinaryUpdateRequestHandler.java:90)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:140)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:120)
 at 
 org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:221)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:105)
 at 
 org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:186)
 at 
 org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:112)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:147)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler.parseAndLoadDocs(BinaryUpdateRequestHandler.java:100)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler.access$000(BinaryUpdateRequestHandler.java:47)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler$1.load(BinaryUpdateRequestHandler.java:58)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:59)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1540)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:435)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:256)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-4044) CloudSolrServer early connect problems

2014-07-13 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-4044:
---

Attachment: SOLR-4044.patch

Added test to reproduce the issue

 CloudSolrServer early connect problems
 --

 Key: SOLR-4044
 URL: https://issues.apache.org/jira/browse/SOLR-4044
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
Reporter: Grant Ingersoll
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-4044.patch


 If you call CloudSolrServer.connect() after Zookeeper is up, but before 
 clusterstate, etc. is populated, you will get No live SolrServer exceptions 
 (line 322 in LBHttpSolrServer):
 {code}
 throw new SolrServerException("No live SolrServers available to handle this 
 request");{code}
 for all requests made even though all the Solr nodes are coming up just fine. 
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6020) Auto-generate a unique key in schema-less mode if data does not have an id field

2014-07-13 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6020:
---

Attachment: SOLR-6020.patch

Attached a patch with changes to UUIDUpdateProcessorFactory and a test.
UUIDUpdateProcessorFactory will use the uniqueKeyField if it is of UUID type 
and no field is defined in the processor configuration.
It might make sense to throw an exception when the configured field or the 
uniqueKeyField is not of UUID type; currently this is silently ignored.

 Auto-generate a unique key in schema-less mode if data does not have an id 
 field
 --

 Key: SOLR-6020
 URL: https://issues.apache.org/jira/browse/SOLR-6020
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Shalin Shekhar Mangar
 Attachments: SOLR-6020.patch


 Currently it is not possible to use the schema-less example if my data does 
 not have an id field.
 I was indexing data where the unique field name was url in schema-less 
 mode. This requires one to first change unique key name in the schema and 
 then start solr and then index docs. If one had already started solr, one'd 
 first need to remove managed-schema, rename schema.xml.bak to schema.xml and 
 then make the necessary changes in schema.xml. I don't think we should fail 
 on such simple things.
 Here's what I propose:
 # We remove id and uniqueKey from the managed schema example
 # If there's a field named id in the document,  we use that as the uniqueKey
 # Else we fallback on generating a UUID or a signature field via an update 
 processor and store it as the unique key field. We can name it as id or 
 _id
 # But if a uniqueKey is already present in original schema.xml then we should 
 expect the incoming data to have that field and we should preserve the 
 current behavior of failing loudly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6016) Failure indexing exampledocs with example-schemaless mode

2014-07-13 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6016:
---

Attachment: SOLR-6016.patch

The cause appears to be a known issue with type mapping in schemaless mode:

when the Integer type has priority over Double and Float and an integer price 
such as 10 comes first, the field is mapped to 
org.apache.solr.schema.TrieLongField; subsequent price values such as 10.6, 
which should be org.apache.solr.schema.TrieDoubleField, then fail to be 
indexed.

Giving Double priority over Integer would solve this case, but then values 
that should be int or long would be mapped as double. I think it makes sense 
to distinguish numeric types with a type suffix, e.g. 10i as integer and 10d 
as double, with integer as the default. That would resolve the issue of 
ambiguous types.
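The failure mode can be reproduced with plain JDK parsing (class and method names hypothetical): once the field type is guessed from the first value as a long, later decimal values fail exactly as in the attached stack trace.

```java
public class TypeGuessDemo {
    // Mimic the type guess: does the value parse as a long?
    static boolean fitsLong(String value) {
        try {
            Long.parseLong(value);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The first value "10" parses as a long, so the field is guessed
        // as TrieLongField...
        System.out.println(fitsLong("10"));     // true
        // ...but the next value "479.95" no longer fits the guessed type,
        // producing NumberFormatException: For input string: "479.95"
        System.out.println(fitsLong("479.95")); // false
        // Parsed as a double (TrieDoubleField), the same value is fine:
        System.out.println(Double.parseDouble("479.95"));
    }
}
```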

 Failure indexing exampledocs with example-schemaless mode
 -

 Key: SOLR-6016
 URL: https://issues.apache.org/jira/browse/SOLR-6016
 Project: Solr
  Issue Type: Bug
  Components: documentation, Schema and Analysis
Affects Versions: 4.7.2, 4.8
Reporter: Shalin Shekhar Mangar
 Attachments: SOLR-6016.patch, solr.log


 Steps to reproduce:
 # cd example; java -Dsolr.solr.home=example-schemaless/solr -jar start.jar
 # cd exampledocs; java -jar post.jar *.xml
 Output from post.jar
 {code}
 Posting files to base url http://localhost:8983/solr/update using 
 content-type application/xml..
 POSTing file gb18030-example.xml
 POSTing file hd.xml
 POSTing file ipod_other.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file ipod_video.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file manufacturers.xml
 POSTing file mem.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file money.xml
 POSTing file monitor2.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file monitor.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file mp500.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file sd500.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 POSTing file solr.xml
 POSTing file utf8-example.xml
 POSTing file vidcard.xml
 SimplePostTool: WARNING: Solr returned an error #400 Bad Request
 SimplePostTool: WARNING: IOException while reading response: 
 java.io.IOException: Server returned HTTP response code: 400 for URL: 
 http://localhost:8983/solr/update
 14 files indexed.
 COMMITting Solr index changes to http://localhost:8983/solr/update..
 Time spent: 0:00:00.401
 {code}
 Exceptions in Solr (I am pasting just one of them):
 {code}
 5105 [qtp697879466-14] ERROR org.apache.solr.core.SolrCore  – 
 org.apache.solr.common.SolrException: ERROR: [doc=EN7800GTX/2DHTV/256M] Error 
 adding field 'price'='479.95' msg=For input string: "479.95"
   at 
 org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:167)
   at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:234)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
   at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
   at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
 ..
 Caused by: java.lang.NumberFormatException: For input string: "479.95"
   at 
 

[jira] [Updated] (SOLR-5095) SolrCore.infoRegistry needs overhauled with some form of namespacing

2014-06-29 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5095:
---

Attachment: SOLR-5095.patch

1. Made the map key the JMX canonical name, which corresponds to the 
registered MBean and is unique.
2. Overrode Map.get to translate an existing key to its JMX canonical name.
3. Changed the unregister method: removed the unused InfoMBean parameter and 
made unregister work on the canonical name.
4. JMX names remain unchanged and backward compatible.
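Why the canonical name makes a unique key can be shown with the standard javax.management API (class and method names hypothetical): two spellings of the same MBean name normalize to a single canonical form, with key properties in lexical order.

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class CanonicalNameDemo {
    // Normalize any spelling of an MBean name to its canonical form.
    static String canonical(String name) {
        try {
            return new ObjectName(name).getCanonicalName();
        } catch (MalformedObjectNameException e) {
            throw new IllegalArgumentException(name, e);
        }
    }

    public static void main(String[] args) {
        // Same MBean, different property order in the source string:
        String a = canonical("solr:type=searcher,id=core1");
        String b = canonical("solr:id=core1,type=searcher");
        System.out.println(a);            // key properties in lexical order
        System.out.println(a.equals(b));  // true
    }
}
```

Keying the infoRegistry on this canonical form means a re-registered MBean always overwrites (and unregisters) the right entry, regardless of how its name was spelled.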


 SolrCore.infoRegistry needs overhauled with some form of namespacing
 --

 Key: SOLR-5095
 URL: https://issues.apache.org/jira/browse/SOLR-5095
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-5095.patch, SOLR-5095_bug_demo.patch


 While investigating SOLR-3616 / SOLR-2715, I realized the failure i was 
 seeing didn't seem to be related to the initial report of that bug, and 
 instead seemed to be due to an obvious and fundemental limitation in the way 
 SolrCore keeps track of plugins using the infoRegistry: It's just a 
 {{MapString, SolrInfoMBean}} keyed off of the name of the plugin, but there 
 is not namespacing used in the infoRegistry, so two completley different 
 types of plugins with the same name will overwrite each other.
 When looking at data using something like /admin/mbeans, this manifests 
 itself solely as missing objects: last one .put() into the infoRegistry 
 wins -- using JMX, both objects are actually visible because of how JMX 
 ObjectNames are built arround a set of key=val pairs, and a bug in how 
 JmxMonitorMap unregisters existing MBeans when .put() is called on a key it 
 already knows about (the unregister call is made using an ObjectName built 
 using the infoBean passed to the put() call -- if infoBean.getName() is not 
 exactly the same as the previous infoBean put() with the same key, then the 
 MbeanServer will continue to know about both of them)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5480) Make MoreLikeThisHandler distributable

2014-06-23 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5480:
---

Attachment: SOLR-5480.patch

Updated the patch sources to the latest trunk. There are two approaches to 
distributed MLT:
1. An mlt qparser that works in single-node and cloud mode, with support for 
numeric ids.
The MLT result is written as the query result, not in a MoreLikeThis section. 
Added tests that exercise single-node and cloud modes.

2. A single MLT component with per-shard mlt distribution; added 
org.apache.solr.handler.DistributedMoreLikeThisHandlerTest.

 Make MoreLikeThisHandler distributable
 --

 Key: SOLR-5480
 URL: https://issues.apache.org/jira/browse/SOLR-5480
 Project: Solr
  Issue Type: Improvement
Reporter: Steve Molloy
Assignee: Noble Paul
 Attachments: SOLR-5480.patch, SOLR-5480.patch, SOLR-5480.patch, 
 SOLR-5480.patch, SOLR-5480.patch, SOLR-5480.patch, SOLR-5480.patch


 The MoreLikeThis component, when used in the standard search handler supports 
 distributed searches. But the MoreLikeThisHandler itself doesn't, which 
 prevents from say, passing in text to perform the query. I'll start looking 
 into adapting the SearchHandler logic to the MoreLikeThisHandler. If anyone 
 has some work done already and want to share, or want to contribute, any help 
 will be welcomed. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6190) Self Describing SearchComponents, RequestHandlers, params. etc.

2014-06-23 Thread Vitaliy Zhovtyuk (JIRA)
Vitaliy Zhovtyuk created SOLR-6190:
--

 Summary: Self Describing SearchComponents, RequestHandlers, 
params. etc.
 Key: SOLR-6190
 URL: https://issues.apache.org/jira/browse/SOLR-6190
 Project: Solr
  Issue Type: Bug
Reporter: Vitaliy Zhovtyuk


We should have self describing parameters for search components, etc.
I think we should support UNIX style short and long names and that you should 
also be able to get a short description of what a parameter does if you ask for 
INFO on it.

For instance, fl could also be fieldList, etc.
Also, we should put this into the base classes so that new components can add 
to it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6191) Self Describing SearchComponents, RequestHandlers, params. etc.

2014-06-23 Thread Vitaliy Zhovtyuk (JIRA)
Vitaliy Zhovtyuk created SOLR-6191:
--

 Summary: Self Describing SearchComponents, RequestHandlers, 
params. etc.
 Key: SOLR-6191
 URL: https://issues.apache.org/jira/browse/SOLR-6191
 Project: Solr
  Issue Type: Bug
Reporter: Vitaliy Zhovtyuk


We should have self describing parameters for search components, etc.
I think we should support UNIX style short and long names and that you should 
also be able to get a short description of what a parameter does if you ask for 
INFO on it.

For instance, fl could also be fieldList, etc.
Also, we should put this into the base classes so that new components can add 
to it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6190) Self Describing SearchComponents, RequestHandlers, params. etc.

2014-06-23 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk closed SOLR-6190.
--

Resolution: Invalid

 Self Describing SearchComponents, RequestHandlers, params. etc.
 ---

 Key: SOLR-6190
 URL: https://issues.apache.org/jira/browse/SOLR-6190
 Project: Solr
  Issue Type: Bug
Reporter: Vitaliy Zhovtyuk

 We should have self describing parameters for search components, etc.
 I think we should support UNIX style short and long names and that you should 
 also be able to get a short description of what a parameter does if you ask 
 for INFO on it.
 For instance, fl could also be fieldList, etc.
 Also, we should put this into the base classes so that new components can add 
 to it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6191) Self Describing SearchComponents, RequestHandlers, params. etc.

2014-06-23 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-6191:
---

Attachment: SOLR-6191.patch

All methods in SolrParams are migrated to the enum approach; the old methods 
taking a string field name are marked as deprecated.
Migrated the MLT params to an enum that carries each parameter's description.
A component/handler with self-described parameters should implement 
SelfDescribableParameters<T>, where T is the enum type describing each 
parameter. This enum type should implement the ParameterDescription interface 
in order to provide the same contract for all parameter-description enums.
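A sketch of that pattern (only the names SelfDescribableParameters and ParameterDescription come from the patch; the holder class, enum, and member names below are hypothetical illustrations):

```java
public class SelfDescribingSketch {

    // Hypothetical contract shared by all parameter-description enums.
    interface ParameterDescription {
        String shortName();   // e.g. "fl"
        String longName();    // e.g. "fieldList"
        String description(); // shown when INFO is requested for the parameter
    }

    // Hypothetical marker for components whose parameters self-describe.
    interface SelfDescribableParameters<T extends Enum<T> & ParameterDescription> {
        Class<T> parameterClass();
    }

    // Hypothetical parameter enum implementing the contract.
    enum DemoParams implements ParameterDescription {
        FIELD_LIST("fl", "fieldList", "Fields to return in the response");

        private final String shortName;
        private final String longName;
        private final String description;

        DemoParams(String shortName, String longName, String description) {
            this.shortName = shortName;
            this.longName = longName;
            this.description = description;
        }

        public String shortName() { return shortName; }
        public String longName() { return longName; }
        public String description() { return description; }
    }
}
```

A component can then iterate parameterClass().getEnumConstants() to emit UNIX-style short/long names and per-parameter help.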

 Self Describing SearchComponents, RequestHandlers, params. etc.
 ---

 Key: SOLR-6191
 URL: https://issues.apache.org/jira/browse/SOLR-6191
 Project: Solr
  Issue Type: Bug
Reporter: Vitaliy Zhovtyuk
  Labels: features
 Attachments: SOLR-6191.patch


 We should have self describing parameters for search components, etc.
 I think we should support UNIX style short and long names and that you should 
 also be able to get a short description of what a parameter does if you ask 
 for INFO on it.
 For instance, fl could also be fieldList, etc.
 Also, we should put this into the base classes so that new components can add 
 to it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3881) frequent OOM in LanguageIdentifierUpdateProcessor

2014-04-06 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-3881:
---

Attachment: SOLR-3881.patch

Updated to the latest trunk.
Fixed multivalue support.
Added a string size calculation used as the StringBuilder capacity, to prevent 
repeated array reallocation on append. (This may also need to be configurable, 
e.g. for large documents only.)
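The capacity idea in a minimal sketch (class and method names hypothetical): sum the value lengths first and pass the total to the StringBuilder constructor, so append() never has to grow and copy the backing array.

```java
public class PresizeDemo {
    // Concatenate field values with a single pre-sized buffer.
    static String concat(String[] values) {
        int total = 0;
        for (String v : values) {
            total += v.length() + 1; // +1 for the separator after each value
        }
        // Exact capacity up front: no intermediate array reallocations.
        StringBuilder sb = new StringBuilder(total);
        for (String v : values) {
            sb.append(v).append(' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat(new String[]{"hello", "world"})); // "hello world "
    }
}
```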

 frequent OOM in LanguageIdentifierUpdateProcessor
 -

 Key: SOLR-3881
 URL: https://issues.apache.org/jira/browse/SOLR-3881
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.0
 Environment: CentOS 6.x, JDK 1.6, (java -server -Xms2G -Xmx2G 
 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=)
Reporter: Rob Tulloh
 Fix For: 4.8

 Attachments: SOLR-3881.patch, SOLR-3881.patch


 We are seeing frequent failures from Solr causing it to OOM. Here is the 
 stack trace we observe when this happens:
 {noformat}
 Caused by: java.lang.OutOfMemoryError: Java heap space
 at java.util.Arrays.copyOf(Arrays.java:2882)
 at 
 java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
 at 
 java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
 at java.lang.StringBuffer.append(StringBuffer.java:224)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.concatFields(LanguageIdentifierUpdateProcessor.java:286)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.process(LanguageIdentifierUpdateProcessor.java:189)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:171)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler$2.update(BinaryUpdateRequestHandler.java:90)
 at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:140)
	at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:120)
	at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:221)
	at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:105)
	at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:186)
	at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:112)
	at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:147)
	at org.apache.solr.handler.BinaryUpdateRequestHandler.parseAndLoadDocs(BinaryUpdateRequestHandler.java:100)
	at org.apache.solr.handler.BinaryUpdateRequestHandler.access$000(BinaryUpdateRequestHandler.java:47)
	at org.apache.solr.handler.BinaryUpdateRequestHandler$1.load(BinaryUpdateRequestHandler.java:58)
	at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:59)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
	at org.apache.solr.core.SolrCore.execute(SolrCore.java:1540)
	at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:435)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:256)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-1632) Distributed IDF

2014-04-01 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-1632:
---

Attachment: SOLR-5488.patch

- Fixed global stats distribution
- Added an assert on the query explain (docNum, weight and idf should be the same in 
distributed tests); this assert is valid only on the 2nd query, since the global stats 
are merged at the end of the 1st query.

 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
Assignee: Mark Miller
 Fix For: 4.8, 5.0

 Attachments: 3x_SOLR-1632_doesntwork.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-5488.patch, 
 distrib-2.patch, distrib.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.






[jira] [Updated] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-30 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5488:
---

Attachment: SOLR-5488.patch

Fixed FieldFacetExtrasTest. Tests passing for me now.

 Fix up test failures for Analytics Component
 

 Key: SOLR-5488
 URL: https://issues.apache.org/jira/browse/SOLR-5488
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, eoe.errors


 The analytics component has a few test failures, perhaps 
 environment-dependent. This is just to collect the test fixes in one place 
 for convenience when we merge back into 4.x






[jira] [Updated] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-27 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5488:
---

Attachment: SOLR-5488.patch

The following changes were done:
1. Removed throw new IllegalArgumentException("No stat named '" + stat + "' in this 
collector " + this) in 
org.apache.solr.analytics.statistics.MinMaxStatsCollector#getStat, because calling 
this method when no stats have been collected is not an exceptional case.
2. Fixed 
org.apache.solr.analytics.facet.FieldFacetTest#perc20Test and 
org.apache.solr.analytics.facet.FieldFacetTest#perc60Test.
The cause was a stat-name mismatch in 
org.apache.solr.analytics.expression.BaseExpression#getValue, e.g. percentile 
vs. percentile_60. Fixed by composing the stat name for percentile calls from 
"percentile_" + the second function argument in 
org.apache.solr.analytics.statistics.StatsCollectorSupplierFactory#create.
3. Fixed 
org.apache.solr.analytics.util.valuesource.FunctionTest#constantStringTest
and org.apache.solr.analytics.util.valuesource.FunctionTest#multiplyTest.
The cause was the unstable iteration order of the Maps containing stats in 
org.apache.solr.analytics.statistics.StatsCollectorSupplierFactory#create.
Changed them to TreeMap, so the order by stat string is always the same.

4. Fixed org.apache.solr.analytics.facet.FieldFacetTest#missingFacetTest
by making 
org.apache.solr.analytics.accumulator.FacetingAccumulator#FacetingAccumulator 
always use the same order for facet fields.

5. Tests in org.apache.solr.analytics.facet.FieldFacetTest were unstable 
because they depended on the order of facet fields returned from the query. Added 
sorting of the results before asserting; also added sorting of the stdev results 
between asserts.

6. Removed a //nocommit row in 
org.apache.solr.analytics.AbstractAnalyticsStatsTest since it did not pass 
precommit.
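Point 3 can be sketched in isolation: a HashMap's iteration order is unspecified, while a TreeMap always iterates in sorted key order, so anything derived by iterating the stats map becomes deterministic. A minimal standalone sketch (the class name StatOrderDemo and the stat names are illustrative, not the analytics component's actual code):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: deterministic key order via TreeMap, independent of
// insertion order. Not the analytics component's real code.
public class StatOrderDemo {
    public static String orderedKeys(Map<String, Integer> stats) {
        // TreeMap sorts keys lexicographically, so iteration is stable.
        return String.join(",", new TreeMap<>(stats).keySet());
    }

    public static void main(String[] args) {
        Map<String, Integer> stats = new LinkedHashMap<>();
        stats.put("stddev", 3);        // inserted out of order on purpose
        stats.put("mean", 2);
        stats.put("percentile_60", 1);
        System.out.println(orderedKeys(stats)); // prints "mean,percentile_60,stddev"
    }
}
```

Whatever the insertion order, the derived string is the same, which is the kind of stability the test fixes rely on.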


 Fix up test failures for Analytics Component
 

 Key: SOLR-5488
 URL: https://issues.apache.org/jira/browse/SOLR-5488
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, eoe.errors


 The analytics component has a few test failures, perhaps 
 environment-dependent. This is just to collect the test fixes in one place 
 for convenience when we merge back into 4.x






[jira] [Updated] (SOLR-5394) facet.method=fcs seems to be using threads when it shouldn't

2014-03-19 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5394:
---

Attachment: SOLR-5394.patch

The attached patch contains 3 tests reproducing the issues with the thread number.
There are 2 unrelated usages of SimpleFacets.threads with different initialization:
- facet.threads - the pool size for getting the term count for each faceted field.
Execution is synchronous if 0.
- the pool size of org.apache.solr.request.PerSegmentSingleValuedFaceting, from 
local parameters used in a query like {!prefix f=bla threads=3 
ex=text:bla}signatureField
If a negative or zero thread number is passed, MAX_INT is used as the thread count - 
int threads = nThreads <= 0 ? Integer.MAX_VALUE : nThreads;
The default value of -1 could be the issue.
Regarding the proposed fix, I don't see any good reason to keep a negative thread 
number by default; if a negative value is accepted at all, it should only be the 
explicit -1.
I propose setting threads=1 by default, meaning single-threaded execution if 
unspecified. 
If a MAX_INT thread pool (that is, an unlimited number of threads) is required, 
it can be specified in the query as -1.
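The mapping under discussion can be sketched as a standalone class; resolveCurrent mirrors the quoted ternary, while resolveProposed is one possible reading of the proposed default (the method names and the null-means-unspecified convention are illustrative, not Solr's API):

```java
// Hypothetical sketch of the thread-count mapping discussed above; not Solr's code.
public class FacetThreadsDemo {
    // Mirrors the quoted ternary: any non-positive request becomes an unbounded
    // pool, which is why a default of -1 spins up many threads.
    public static int resolveCurrent(int nThreads) {
        return nThreads <= 0 ? Integer.MAX_VALUE : nThreads;
    }

    // One reading of the proposal: unspecified means single-threaded, and only
    // an explicit -1 opts into an unbounded pool.
    public static int resolveProposed(Integer requested) {
        if (requested == null) return 1;                // unspecified -> serial
        if (requested == -1) return Integer.MAX_VALUE;  // explicit opt-in only
        return Math.max(1, requested);
    }
}
```

With this proposal, a request that never mentions facet.threads runs serially, and an unbounded pool requires the explicit -1.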



 facet.method=fcs seems to be using threads when it shouldn't
 

 Key: SOLR-5394
 URL: https://issues.apache.org/jira/browse/SOLR-5394
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6
Reporter: Michael McCandless
 Attachments: SOLR-5394.patch, 
 SOLR-5394_keep_threads_original_value.patch


 I built a wikipedia index, with multiple fields for faceting.
 When I do facet.method=fcs with facet.field=dateFacet and 
 facet.field=userNameFacet, and then kill -QUIT the java process, I see a 
 bunch (46, I think) of facetExecutor-7-thread-N threads had spun up.
 But I thought threads for each field is turned off by default?
 Even if I add facet.threads=0, it still spins up all the threads.
 I think something is wrong in SimpleFacets.parseParams; somehow, that method 
 returns early (because localParams is null), leaving threads=-1, and then the 
 later code that would have set threads to 0 never runs.






[jira] [Updated] (SOLR-5763) Upgrade to Tika 1.5

2014-03-18 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5763:
---

Attachment: SOLR-5763.patch

Updated versions and checksums:
pdfbox  1.8.1 -> 1.8.4
jempbox 1.8.1 -> 1.8.4
fontbox 1.8.1 -> 1.8.4

 Upgrade to Tika 1.5
 ---

 Key: SOLR-5763
 URL: https://issues.apache.org/jira/browse/SOLR-5763
 Project: Solr
  Issue Type: Task
  Components: contrib - Solr Cell (Tika extraction)
Reporter: Steve Rowe
Priority: Minor
 Attachments: SOLR-5763.patch, SOLR-5763.patch, SOLR-5763.patch


 Just released: http://www.apache.org/dist/tika/CHANGES-1.5.txt






[jira] [Updated] (SOLR-5763) Upgrade to Tika 1.5

2014-03-17 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5763:
---

Attachment: SOLR-5763.patch

Removed duplicates in solr-cell ivy.xml
Updated tika to 1.5 and updated tika's dependencies:
pdfbox  1.8.1 -> 1.8.2
jempbox 1.8.1 -> 1.8.2
fontbox 1.8.1 -> 1.8.2
POI 3.9 -> 3.10-beta2
xz 1.0 -> 1.2

 Upgrade to Tika 1.5
 ---

 Key: SOLR-5763
 URL: https://issues.apache.org/jira/browse/SOLR-5763
 Project: Solr
  Issue Type: Task
  Components: contrib - Solr Cell (Tika extraction)
Reporter: Steve Rowe
Priority: Minor
 Attachments: SOLR-5763.patch


 Just released: http://www.apache.org/dist/tika/CHANGES-1.5.txt






[jira] [Updated] (SOLR-5763) Upgrade to Tika 1.5

2014-03-17 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5763:
---

Attachment: SOLR-5763.patch

Updated sha1 checksums

 Upgrade to Tika 1.5
 ---

 Key: SOLR-5763
 URL: https://issues.apache.org/jira/browse/SOLR-5763
 Project: Solr
  Issue Type: Task
  Components: contrib - Solr Cell (Tika extraction)
Reporter: Steve Rowe
Priority: Minor
 Attachments: SOLR-5763.patch, SOLR-5763.patch


 Just released: http://www.apache.org/dist/tika/CHANGES-1.5.txt






[jira] [Updated] (SOLR-1604) Wildcards, ORs etc inside Phrase Queries

2014-03-14 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-1604:
---

Attachment: SOLR-1604.patch

It probably makes sense to base the plug-in on the Solr sources.
The attached patch contains 
org.apache.lucene.queryparser.classic.ComplexPhraseQueryParser and 
org.apache.solr.search.ComplexPhraseQParserPlugin in the Lucene and Solr sources 
respectively. 
Made some code cleanup.
Added tests on the Lucene side for 
org.apache.lucene.queryparser.classic.ComplexPhraseQueryParser.
Renamed the test resources for the Solr plugin test.
I am continuing to work on the name:jo* prefix query issue, where the Highlighter 
returns an empty string instead of highlighting the matched term (TODO added).

 Wildcards, ORs etc inside Phrase Queries
 

 Key: SOLR-1604
 URL: https://issues.apache.org/jira/browse/SOLR-1604
 Project: Solr
  Issue Type: Improvement
  Components: query parsers, search
Affects Versions: 1.4
Reporter: Ahmet Arslan
Assignee: Erick Erickson
Priority: Minor
 Attachments: ASF.LICENSE.NOT.GRANTED--ComplexPhrase.zip, 
 ComplexPhrase-4.2.1.zip, ComplexPhrase-4.7.zip, ComplexPhrase.zip, 
 ComplexPhrase.zip, ComplexPhrase.zip, ComplexPhrase.zip, ComplexPhrase.zip, 
 ComplexPhrase.zip, ComplexPhraseQueryParser.java, ComplexPhrase_solr_3.4.zip, 
 SOLR-1604-alternative.patch, SOLR-1604.patch, SOLR-1604.patch, SOLR-1604.patch


 Solr Plugin for ComplexPhraseQueryParser (LUCENE-1486) which supports 
 wildcards, ORs, ranges, fuzzies inside phrase queries.






[jira] [Updated] (SOLR-1604) Wildcards, ORs etc inside Phrase Queries

2014-03-14 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-1604:
---

Attachment: SOLR-1604.patch

 I mean 2 things:
1. Put the plug-in sources in the Solr codebase (not as a separate lib jar)
2. Make org.apache.solr.search.ComplexPhraseQParserPlugin a built-in standard 
plug-in

Added a patch fixing the name:jo* issue with the highlighter.
Tests in org.apache.solr.search.ComplexPhraseQParserPluginTest are uncommented and 
passing.

 Wildcards, ORs etc inside Phrase Queries
 

 Key: SOLR-1604
 URL: https://issues.apache.org/jira/browse/SOLR-1604
 Project: Solr
  Issue Type: Improvement
  Components: query parsers, search
Affects Versions: 1.4
Reporter: Ahmet Arslan
Assignee: Erick Erickson
Priority: Minor
 Attachments: ASF.LICENSE.NOT.GRANTED--ComplexPhrase.zip, 
 ComplexPhrase-4.2.1.zip, ComplexPhrase-4.7.zip, ComplexPhrase.zip, 
 ComplexPhrase.zip, ComplexPhrase.zip, ComplexPhrase.zip, ComplexPhrase.zip, 
 ComplexPhrase.zip, ComplexPhraseQueryParser.java, ComplexPhrase_solr_3.4.zip, 
 SOLR-1604-alternative.patch, SOLR-1604.patch, SOLR-1604.patch, 
 SOLR-1604.patch, SOLR-1604.patch


 Solr Plugin for ComplexPhraseQueryParser (LUCENE-1486) which supports 
 wildcards, ORs, ranges, fuzzies inside phrase queries.






[jira] [Updated] (SOLR-3177) Excluding tagged filter in StatsComponent

2014-03-14 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-3177:
---

Attachment: SOLR-3177.patch

Removed the grouping code from 
org.apache.solr.handler.component.SimpleStats#parseParams.

The grouping functionality in stats needs a separate ticket to address it.


 Excluding tagged filter in StatsComponent
 -

 Key: SOLR-3177
 URL: https://issues.apache.org/jira/browse/SOLR-3177
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
Affects Versions: 3.5, 3.6, 4.0-ALPHA, 4.1
Reporter: Mathias H.
Assignee: Shalin Shekhar Mangar
Priority: Minor
  Labels: localparams, stats, statscomponent
 Attachments: SOLR-3177.patch, SOLR-3177.patch, SOLR-3177.patch


 It would be useful to exclude the effects of some fq params from the set of 
 documents used to compute stats -- similar to 
 how you can exclude tagged filters when generating facet counts... 
 https://wiki.apache.org/solr/SimpleFacetParameters#Tagging_and_excluding_Filters
 So that it's possible to do something like this... 
 http://localhost:8983/solr/select?fq={!tag=priceFilter}price:[1 TO 
 20]&q=*:*&stats=true&stats.field={!ex=priceFilter}price 
 If you want to create a price slider this is very useful because then you can 
 filter the price ([1 TO 20) and nevertheless get the lower and upper bound of 
 the unfiltered price (min=0, max=100):
 {noformat}
 |-[---]--|
 $0 $1 $20$100
 {noformat}






[jira] [Updated] (SOLR-1632) Distributed IDF

2014-03-09 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-1632:
---

Attachment: SOLR-1632.patch

Updated to latest trunk.
Cleaned up code duplicates. Fixed org.apache.solr.search.stats.TestLRUStatsCache, 
added a test for org.apache.solr.search.stats.ExactSharedStatsCache.
Fixed javadocs.

 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
Assignee: Mark Miller
 Fix For: 4.7, 5.0

 Attachments: 3x_SOLR-1632_doesntwork.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, distrib-2.patch, 
 distrib.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.






[jira] [Updated] (SOLR-3218) Range faceting support for CurrencyField

2014-03-03 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-3218:
---

Attachment: SOLR-3218.patch

Fixed some javadocs

 Range faceting support for CurrencyField
 

 Key: SOLR-3218
 URL: https://issues.apache.org/jira/browse/SOLR-3218
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Jan Høydahl
 Fix For: 4.7

 Attachments: SOLR-3218-1.patch, SOLR-3218-2.patch, SOLR-3218.patch, 
 SOLR-3218.patch, SOLR-3218.patch, SOLR-3218.patch, SOLR-3218.patch


 Spinoff from SOLR-2202. Need to add range faceting capabilities for 
 CurrencyField






[jira] [Updated] (SOLR-5466) Add List Collections functionality to Collections API

2014-02-25 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5466:
---

Attachment: SOLR-5466.patch

Added a STATUS operation for collections: for a specific collection via the collection 
parameter, or for a specific collection and shard (comma-separated shard parameter); 
all properties are retrieved from the cluster state without a request to the ZK host.
The collection status action is similar to the core admin STATUS call.

I left the LISTCOLLECTIONS action as is, because the user should have the option of 
whether to get the collection status from the cluster state or from the ZK host directly.

 Add List Collections functionality to Collections API
 -

 Key: SOLR-5466
 URL: https://issues.apache.org/jira/browse/SOLR-5466
 Project: Solr
  Issue Type: Sub-task
  Components: scripts and tools, SolrCloud
 Environment: All
Reporter: Dave Seltzer
Assignee: Shalin Shekhar Mangar
Priority: Minor
  Labels: api, collections, rest
 Attachments: SOLR-5466.patch, SOLR-5466.patch


 The collections API lets you add, delete and modify existing collections. At 
 the moment the API does not let you get a list of current collections or view 
 information about a specific collection.
 The workaround is the use the Zookeeper API to get the list. This makes the 
 Collections API harder to work with. 
 Adding an action=LIST would significantly improve the function of this API.






[jira] [Commented] (SOLR-1880) Performance: Distributed Search should skip GET_FIELDS stage if EXECUTE_QUERY stage gets all fields

2014-02-19 Thread Vitaliy Zhovtyuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13905530#comment-13905530
 ] 

Vitaliy Zhovtyuk commented on SOLR-1880:


Yes, this optimization will work only in the fl=id,score case

 Performance: Distributed Search should skip GET_FIELDS stage if EXECUTE_QUERY 
 stage gets all fields
 ---

 Key: SOLR-1880
 URL: https://issues.apache.org/jira/browse/SOLR-1880
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 1.4
Reporter: Shawn Smith
Assignee: Shalin Shekhar Mangar
 Attachments: ASF.LICENSE.NOT.GRANTED--one-pass-query-v1.4.0.patch, 
 ASF.LICENSE.NOT.GRANTED--one-pass-query.patch, SOLR-1880.patch


 Right now, a typical distributed search using QueryComponent makes two HTTP 
 requests to each shard:
 # STAGE_EXECUTE_QUERY executes one HTTP request to each shard to get top N 
 ids and sort keys, merges the results to produce a final list of document IDs 
 (PURPOSE_GET_TOP_IDS).
 # STAGE_GET_FIELDS executes a second HTTP request to each shard to get the 
 document field values for the final list of document IDs (PURPOSE_GET_FIELDS).
 If the fl param is just "id" or just "id,score", all document data to 
 return is already fetched by STAGE_EXECUTE_QUERY.  The second 
 STAGE_GET_FIELDS query is completely unnecessary.  Eliminating that 2nd HTTP 
 request can make a big difference in overall performance.
 Also, the fl param only gets id, score and sort columns, it would probably 
 be cheaper to fetch the final sort column data in STAGE_EXECUTE_QUERY which 
 has to read the sort column data anyway, and skip STAGE_GET_FIELDS.
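The skip condition described above can be sketched as a standalone check; the class and method names are hypothetical, and real QueryComponent logic would also have to account for sort fields and other distributed-request internals:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: decide whether the GET_FIELDS stage can be skipped,
// based only on the fl parameter. Not Solr's actual QueryComponent code.
public class OnePassCheck {
    public static boolean canSkipGetFields(String fl) {
        if (fl == null || fl.isEmpty()) {
            return false; // default fl returns all stored fields
        }
        Set<String> fields = new HashSet<>(Arrays.asList(fl.trim().split("\\s*,\\s*")));
        fields.remove("id");
        fields.remove("score");
        // Nothing left means stage one already returned everything requested.
        return fields.isEmpty();
    }
}
```

canSkipGetFields("id,score") returns true, while any extra field forces the usual two-pass flow.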






[jira] [Updated] (SOLR-2908) To push the terms.limit parameter from the master core to all the shard cores.

2014-02-19 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-2908:
---

Attachment: SOLR-2908.patch

If you want to override only the limit passed to the shards, the other params should 
also not be passed: {code}sreq.params.set(TermsParams.TERMS_SORT, 
TermsParams.TERMS_SORT_INDEX);
  sreq.params.set(TermsParams.TERMS_LIMIT, -1);
  sreq.params.remove(TermsParams.TERMS_MAXCOUNT);
  sreq.params.remove(TermsParams.TERMS_MINCOUNT);{code} otherwise 
completely wrong (unsorted) terms will be limited and returned from the shards.
Please see the attached patch illustrating this idea by adding a 
'shards.terms.limit' parameter.
Note that the problem with inconsistent results still exists, but it can be 
minimized by combining terms.limit and shards.terms.limit as in 
org.apache.solr.handler.component.DistributedTermsComponentParametersTest.
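The parameter handling described here can be sketched with a plain Map standing in for Solr's ModifiableSolrParams; buildShardParams and the exact treatment of shards.terms.limit are an assumption based on this comment, not the patch's actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the shard-parameter rewrite described above; a plain
// Map stands in for ModifiableSolrParams, and "shards.terms.limit" is the
// parameter proposed in the patch.
public class ShardTermsParams {
    public static Map<String, String> buildShardParams(Map<String, String> original) {
        Map<String, String> shard = new HashMap<>(original);
        // Sort by index on the shards so a limited slice is well-defined...
        shard.put("terms.sort", "index");
        // ...and drop count filters that would skew per-shard counts.
        shard.remove("terms.maxcount");
        shard.remove("terms.mincount");
        // Push an explicit per-shard limit down only if the client asked for
        // one; otherwise keep the unlimited default of -1.
        shard.put("terms.limit", original.getOrDefault("shards.terms.limit", "-1"));
        return shard;
    }
}
```

A request carrying shards.terms.limit=100 would then ask each shard for at most 100 index-ordered terms instead of the full term list.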

 To push the terms.limit parameter from the master core to all the shard cores.
 --

 Key: SOLR-2908
 URL: https://issues.apache.org/jira/browse/SOLR-2908
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
Affects Versions: 1.4.1
 Environment: Linux server. 64 bit processor and 16GB Ram.
Reporter: sivaganesh
Assignee: Shalin Shekhar Mangar
Priority: Critical
  Labels: patch
 Fix For: 4.7

 Attachments: SOLR-2908.patch, SOLR-2908.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 When we pass the terms.limit parameter to the master (which has many shard 
 cores), it's not getting pushed down to the individual cores. Instead, the 
 default value of -1 is assigned to the terms.limit parameter in the 
 underlying shard cores. The issue is that the time taken by the master core to 
 return the required limit of terms grows with the number 
 of underlying shard cores. This affects the performance of the auto-suggest 
 feature. 
 We thought we could have a parameter to explicitly override the -1 being set 
 for terms.limit in the shard cores.
 We read the source code (TermsComponent.java) and concluded the same. 
 Please help us in pushing the terms.limit parameter to the shard cores. 
 PFB code snippet:
 {code}
 private ShardRequest createShardQuery(SolrParams params) {
   ShardRequest sreq = new ShardRequest();
   sreq.purpose = ShardRequest.PURPOSE_GET_TERMS;
   // base shard request on original parameters
   sreq.params = new ModifiableSolrParams(params);
   // remove any limits for shards, we want them to return all possible
   // responses; we want this so we can calculate the correct counts
   // don't sort by count to avoid that unnecessary overhead on the shards
   sreq.params.remove(TermsParams.TERMS_MAXCOUNT);
   sreq.params.remove(TermsParams.TERMS_MINCOUNT);
   sreq.params.set(TermsParams.TERMS_LIMIT, -1);
   sreq.params.set(TermsParams.TERMS_SORT, TermsParams.TERMS_SORT_INDEX);
   return sreq;
 }
 {code}
 Solr Version:
 Solr Specification Version: 1.4.0.2010.01.13.08.09.44 
  Solr Implementation Version: 1.5-dev exported - yonik - 2010-01-13 08:09:44 
  Lucene Specification Version: 2.9.1-dev 
  Lucene Implementation Version: 2.9.1-dev 888785 - 2009-12-09 18:03:31 






[jira] [Updated] (SOLR-3177) Excluding tagged filter in StatsComponent

2014-02-19 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-3177:
---

Attachment: SOLR-3177.patch

Updated to latest trunk. Fixed StatsComponent to pass tests.

 Excluding tagged filter in StatsComponent
 -

 Key: SOLR-3177
 URL: https://issues.apache.org/jira/browse/SOLR-3177
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
Affects Versions: 3.5, 3.6, 4.0-ALPHA, 4.1
Reporter: Mathias H.
Assignee: Shalin Shekhar Mangar
Priority: Minor
  Labels: localparams, stats, statscomponent
 Attachments: SOLR-3177.patch, SOLR-3177.patch


 It would be useful to exclude the effects of some fq params from the set of 
 documents used to compute stats -- similar to 
 how you can exclude tagged filters when generating facet counts... 
 https://wiki.apache.org/solr/SimpleFacetParameters#Tagging_and_excluding_Filters
 So that it's possible to do something like this... 
 http://localhost:8983/solr/select?fq={!tag=priceFilter}price:[1 TO 
 20]&q=*:*&stats=true&stats.field={!ex=priceFilter}price 
 If you want to create a price slider this is very useful because then you can 
 filter the price ([1 TO 20) and nevertheless get the lower and upper bound of 
 the unfiltered price (min=0, max=100):
 {noformat}
 |-[---]--|
 $0 $1 $20$100
 {noformat}






[jira] [Updated] (SOLR-1880) Performance: Distributed Search should skip GET_FIELDS stage if EXECUTE_QUERY stage gets all fields

2014-02-17 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-1880:
---

Attachment: SOLR-1880.patch

Updated to latest trunk. 
Added a functional distributed test, 
org.apache.solr.handler.component.DistributedQueryComponentOptimizationTest, for 
the one-pass case.
Added a trace to return the error reason in 
org.apache.solr.client.solrj.impl.HttpSolrServer; otherwise runtime errors are hard 
to detect.

 Performance: Distributed Search should skip GET_FIELDS stage if EXECUTE_QUERY 
 stage gets all fields
 ---

 Key: SOLR-1880
 URL: https://issues.apache.org/jira/browse/SOLR-1880
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 1.4
Reporter: Shawn Smith
 Attachments: ASF.LICENSE.NOT.GRANTED--one-pass-query-v1.4.0.patch, 
 ASF.LICENSE.NOT.GRANTED--one-pass-query.patch, SOLR-1880.patch


 Right now, a typical distributed search using QueryComponent makes two HTTP 
 requests to each shard:
 # STAGE_EXECUTE_QUERY executes one HTTP request to each shard to get top N 
 ids and sort keys, merges the results to produce a final list of document IDs 
 (PURPOSE_GET_TOP_IDS).
 # STAGE_GET_FIELDS executes a second HTTP request to each shard to get the 
 document field values for the final list of document IDs (PURPOSE_GET_FIELDS).
 If the fl param is just "id" or just "id,score", all document data to 
 return is already fetched by STAGE_EXECUTE_QUERY.  The second 
 STAGE_GET_FIELDS query is completely unnecessary.  Eliminating that 2nd HTTP 
 request can make a big difference in overall performance.
 Also, the fl param only gets id, score and sort columns, it would probably 
 be cheaper to fetch the final sort column data in STAGE_EXECUTE_QUERY which 
 has to read the sort column data anyway, and skip STAGE_GET_FIELDS.






[jira] [Updated] (SOLR-3218) Range faceting support for CurrencyField

2014-02-16 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-3218:
---

Attachment: SOLR-3218.patch

Updated to latest trunk. Added range facet tests to 
org.apache.solr.schema.AbstractCurrencyFieldTest. Moved 
org.apache.solr.schema.CurrencyValue back to a separate class from a nested class of 
org.apache.solr.schema.CurrencyField, since CurrencyValue is used outside, in 
org.apache.solr.request.SimpleFacets and other classes. It is probably worth 
wrapping and encapsulating it in org.apache.solr.schema.CurrencyField.

 Range faceting support for CurrencyField
 

 Key: SOLR-3218
 URL: https://issues.apache.org/jira/browse/SOLR-3218
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Jan Høydahl
 Fix For: 4.7

 Attachments: SOLR-3218-1.patch, SOLR-3218-2.patch, SOLR-3218.patch, 
 SOLR-3218.patch, SOLR-3218.patch, SOLR-3218.patch


 Spinoff from SOLR-2202. Need to add range faceting capabilities for 
 CurrencyField






[jira] [Updated] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2014-02-16 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-1913:
---

Attachment: SOLR-1913.patch

Changed the package for BitwiseFilter to org.apache.lucene.search.BitwiseFilter,
and for BitwiseQueryParserPlugin to org.apache.solr.search.BitwiseQueryParserPlugin.

Added Lucene tests for BitwiseFilter, and added Solr tests checking bitwise parser 
queries for BitwiseQueryParserPlugin.

 QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
 on Integer Fields
 ---

 Key: SOLR-1913
 URL: https://issues.apache.org/jira/browse/SOLR-1913
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Israel Ekpo
 Fix For: 4.7

 Attachments: SOLR-1913-src.tar.gz, SOLR-1913.bitwise.tar.gz, 
 SOLR-1913.patch, WEB-INF lib.jpg, bitwise_filter_plugin.jar, 
 solr-bitwise-plugin.jar

   Original Estimate: 1h
  Remaining Estimate: 1h

 BitwiseQueryParserPlugin is a org.apache.solr.search.QParserPlugin that 
 allows 
 users to filter the documents returned from a query
 by performing bitwise operations between a particular integer field in the 
 index
 and the specified value.
 This Solr plugin is based on the BitwiseFilter in LUCENE-2460
 See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
 This is the syntax for searching in Solr:
 http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
 op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
 Example :
 http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
 op=AND source=3 negate=true}state:FL
 The negate parameter is optional
 The field parameter is the name of the integer field
 The op parameter is the name of the operation; one of {AND, OR, XOR}
 The source parameter is the specified integer value
 The negate parameter is a boolean indicating whether or not to negate the 
 results of the bitwise operation
 To test out this plugin, simply copy the jar file containing the plugin 
 classes into your $SOLR_HOME/lib directory and then
 add the following to your solrconfig.xml file after the dismax request 
 handler:
 {{<queryParser name="bitwise" class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax" />}}
 Restart your servlet container.
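
As a rough stand-alone illustration of the filtering described above (a sketch with a hypothetical class; the authoritative matching semantics live in LUCENE-2460), one plausible convention is that a document matches when the bitwise result equals the source value, optionally negated:

```java
public class BitwiseMatch {
    enum Op { AND, OR, XOR }

    // Hypothetical matching rule, assumed for illustration: apply the bitwise
    // operation between the stored field value and the source value, match
    // when the result equals source, and invert the outcome when negate=true.
    static boolean matches(int fieldValue, Op op, int source, boolean negate) {
        int result;
        switch (op) {
            case AND: result = fieldValue & source; break;
            case OR:  result = fieldValue | source; break;
            default:  result = fieldValue ^ source; break; // XOR
        }
        boolean match = (result == source);
        return negate ? !match : match;
    }

    public static void main(String[] args) {
        // user_permissions=7 has bits 1 and 2 set, so AND with source=3 keeps it
        System.out.println(matches(7, Op.AND, 3, false)); // true
        System.out.println(matches(4, Op.AND, 3, false)); // false
    }
}
```

With op=AND, source=3, negate=true (as in the example URL above), such a filter would instead keep documents whose permission bits do not fully contain the source bits.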






[jira] [Updated] (SOLR-2908) To push the terms.limit parameter from the master core to all the shard cores.

2014-02-14 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-2908:
---

Attachment: SOLR-2908.patch

If you limit the number of terms, you also need to pass at least the sort order 
to each shard in order to get the most relevant terms (if needed).
Added a shards.terms.params.override=true parameter, indicating whether the 
terms parameters (terms.limit, terms.sort, terms.maxcount, terms.mincount) 
should be passed to the shards.
Using this parameter with terms.sort=index (no count sorting) is fine, but using 
shards.terms.params.override with terms.sort=count can lead to results that are 
inconsistent with a single core.
See org.apache.solr.handler.component.DistributedTermsComponentParametersTest. 

For example, we use 
{code}shards.terms.params.override=true&terms.limit=5&terms.sort=count{code}
and data
{code}index(id, "18", "b_t", "snake spider shark snail slug seal");
index(id, "19", "b_t", "snake spider shark snail slug");
index(id, "20", "b_t", "snake spider shark snail");
index(id, "21", "b_t", "snake spider shark");
index(id, "22", "b_t", "snake spider");
index(id, "23", "b_t", "snake");
index(id, "24", "b_t", "ant zebra");
index(id, "25", "b_t", "zebra");
{code}

With a single core the results will be:
{code}snake=6 spider=5 shark=4 snail=3 slug=2{code}

For 2 shards the results will be:
shard 1: {code} snake=3 spider=3 shark=2 snail=2 ant=1 {code}
shard 2: {code} snake=3 spider=2 shark=2 seal=1 slug=1 {code}

Combined result: {code} snake=6 spider=5 shark=4 snail=2 ant=1 {code}

I suggest this parameter override will be useful with sorting and custom 
routing, in the case where the same terms are located on the same shard and are 
sorted and limited there correctly.
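
The discrepancy above can be reproduced with a small stand-alone sketch (hypothetical class, not Solr code) of what the coordinator does once each shard has already truncated its term list to its own top 5:

```java
import java.util.*;
import java.util.stream.*;

public class ShardTermMerge {
    // Sum per-shard term counts, then keep the "limit" terms with the highest
    // merged counts (ordering among tied counts is unspecified).
    static Map<String, Integer> mergeTop(int limit, List<Map<String, Integer>> shards) {
        Map<String, Integer> merged = new HashMap<>();
        for (Map<String, Integer> shard : shards) {
            shard.forEach((term, count) -> merged.merge(term, count, Integer::sum));
        }
        return merged.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder()))
                .limit(limit)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                        (a, b) -> a, LinkedHashMap::new));
    }

    public static void main(String[] args) {
        // The top-5 lists each shard returns once terms.limit=5 is pushed down.
        Map<String, Integer> shard1 = Map.of("snake", 3, "spider", 3, "shark", 2, "snail", 2, "ant", 1);
        Map<String, Integer> shard2 = Map.of("snake", 3, "spider", 2, "shark", 2, "seal", 1, "slug", 1);
        // snail=1 on shard 2 was already cut off, so the merged count is 2,
        // although the corpus-wide count is 3.
        System.out.println(mergeTop(5, List.of(shard1, shard2)));
    }
}
```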

 To push the terms.limit parameter from the master core to all the shard cores.
 --

 Key: SOLR-2908
 URL: https://issues.apache.org/jira/browse/SOLR-2908
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
Affects Versions: 1.4.1
 Environment: Linux server. 64 bit processor and 16GB Ram.
Reporter: sivaganesh
Priority: Critical
  Labels: patch
 Fix For: 4.7

 Attachments: SOLR-2908.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 When we pass the terms.limit parameter to the master (which has many shard 
 cores), it is not pushed down to the individual cores. Instead, the default 
 value of -1 is assigned to the terms.limit parameter in the underlying shard 
 cores. The issue is that the time taken by the master core to return the 
 required limit of terms grows with the number of underlying shard cores. This 
 affects the performance of the auto-suggest feature. 
 We think we could have a parameter to explicitly override the -1 being set 
 to terms.limit in the shard cores.
 We looked at the source code (TermsComponent.java) and concluded the same. 
 Please help us in pushing the terms.limit parameter to the shard cores. 
 PFB code snippet:
 {code}
 private ShardRequest createShardQuery(SolrParams params) {
   ShardRequest sreq = new ShardRequest();
   sreq.purpose = ShardRequest.PURPOSE_GET_TERMS;

   // base shard request on original parameters
   sreq.params = new ModifiableSolrParams(params);

   // remove any limits for shards, we want them to return all possible
   // responses
   // we want this so we can calculate the correct counts
   // don't sort by count to avoid that unnecessary overhead on the shards
   sreq.params.remove(TermsParams.TERMS_MAXCOUNT);
   sreq.params.remove(TermsParams.TERMS_MINCOUNT);
   sreq.params.set(TermsParams.TERMS_LIMIT, -1);
   sreq.params.set(TermsParams.TERMS_SORT, TermsParams.TERMS_SORT_INDEX);
   return sreq;
 }
 {code}
 Solr Version:
 Solr Specification Version: 1.4.0.2010.01.13.08.09.44 
  Solr Implementation Version: 1.5-dev exported - yonik - 2010-01-13 08:09:44 
  Lucene Specification Version: 2.9.1-dev 
  Lucene Implementation Version: 2.9.1-dev 888785 - 2009-12-09 18:03:31 






[jira] [Commented] (SOLR-5589) Disabled replication in config is ignored

2014-01-29 Thread Vitaliy Zhovtyuk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13885556#comment-13885556
 ] 

Vitaliy Zhovtyuk commented on SOLR-5589:


Let's consider the following config:
{code}
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml</str>
  </lst>
  <lst name="slave">
    <str name="enable">false</str>
    <str name="masterUrl">http://127.0.0.1:TEST_PORT/solr</str>
    <str name="pollInterval">00:00:01</str>
    <str name="compression">COMPRESSION</str>
  </lst>
</requestHandler>
{code}

The slave is disabled, but the master can still be used to replicate to a 
separate Solr instance.
Therefore I think it only makes sense to disable replication when both master 
and slave are explicitly disabled.
And I think it will not have side effects with replication when the slave is 
disabled for some reason but the master is replicating to a separate instance.
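
The rule argued for here reduces to a small predicate. A minimal sketch (hypothetical helper, not Solr's actual code), where null stands for a section or enable flag that is not configured at all:

```java
public class ReplicationEnableRule {
    // Replication is considered disabled only when BOTH sections explicitly
    // carry enable=false; an absent section or an omitted flag (null) keeps
    // replication available.
    static boolean replicationDisabled(Boolean masterEnable, Boolean slaveEnable) {
        return Boolean.FALSE.equals(masterEnable) && Boolean.FALSE.equals(slaveEnable);
    }

    public static void main(String[] args) {
        System.out.println(replicationDisabled(false, false)); // true
        // slave disabled but master unconfigured: master may still replicate
        System.out.println(replicationDisabled(null, false));  // false
    }
}
```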

 Disabled replication in config is ignored
 -

 Key: SOLR-5589
 URL: https://issues.apache.org/jira/browse/SOLR-5589
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.5
Reporter: alexey
Assignee: Shalin Shekhar Mangar
 Fix For: 4.7

 Attachments: SOLR-5589.patch, SOLR-5589.patch, SOLR-5589.patch, 
 SOLR-5589.patch


 When replication on the master node is explicitly disabled in config, it is 
 still enabled after start. This is because when both the master and slave 
 configurations are written with enabled=false, the replication handler 
 considers this node a master and enables it. With the proposed patch the 
 handler will still consider this a master node but will disable replication on 
 startup if it is disabled in config (equivalent to the disablereplication 
 command).






[jira] [Updated] (SOLR-5561) DefaultSimilarity 'init' method is not called, if the similarity plugin is not explicitly declared in the schema

2014-01-29 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5561:
---

Attachment: SOLR-5561.patch

Added changes to IndexSchema
Added IndexSchema tests checking discountOverlaps depending on 
luceneMatchVersion

 DefaultSimilarity 'init' method is not called, if the similarity plugin is 
 not explicitly declared in the schema
 

 Key: SOLR-5561
 URL: https://issues.apache.org/jira/browse/SOLR-5561
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.6
Reporter: Isaac Hebsh
  Labels: similarity
 Fix For: 4.7, 4.6.1

 Attachments: SOLR-5561.patch, SOLR-5561.patch, SOLR-5561.patch


 As a result, discountOverlap is not initialized to true, and the default 
 behavior is that multiple terms on the same position DO affect the fieldNorm. 
 This is not the intended default.






[jira] [Updated] (SOLR-5257) confusing warning logged when unexpected xml attributes are found

2014-01-29 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5257:
---

Attachment: SOLR-5257.patch

Fixed the warning messages.
I have not added TODOs, but a few if's in this class could be replaced with a 
switch for clarity, starting at lines 221, 233, 299.
Maybe improve this in the future.


 confusing warning logged when unexpected xml attributes are found
 -

 Key: SOLR-5257
 URL: https://issues.apache.org/jira/browse/SOLR-5257
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
Priority: Minor
 Attachments: SOLR-5257.patch


 Brian Robinson on the solr-user list got really confused by this warning 
 message...
 {{Unknown attribute id in add:allowDups}}
 ...the mention of id in that warning was a big red herring that led him to 
 assume something was wrong with the id in his documents, because it's not 
 at all clear that it's referring to the xml node id of an unexpected xml 
 attribute (which in this case is allowDups).
 Filing this issue so I remember to fix this warning to be more helpful, and 
 review the rest of the file while I'm at it for other confusing warnings.






[jira] [Updated] (SOLR-5589) Disabled replication in config is ignored

2014-01-28 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5589:
---

Attachment: SOLR-5589.patch

Master and slave are considered disabled in three cases: master/slave is not 
configured; it is configured but the enable parameter is missing; or 
enable=false is explicitly defined on master/slave.
In all three cases, when master and slave are disabled, the only default for a 
non-configured ReplicationHandler is master with replicateOnCommit.
Since we disable replication in this case (master and slave not configured, or 
the enable config omitted), replication stops working.
This is the reason for those test failures (master and replication are expected 
when the handler configuration is omitted).

I propose to disable replication only in the case of explicitly defined 
enable=false for both master and slave. 
The attached patch illustrates the idea.

 Disabled replication in config is ignored
 -

 Key: SOLR-5589
 URL: https://issues.apache.org/jira/browse/SOLR-5589
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.5
Reporter: alexey
Assignee: Shalin Shekhar Mangar
 Fix For: 4.7

 Attachments: SOLR-5589.patch, SOLR-5589.patch, SOLR-5589.patch, 
 SOLR-5589.patch








[jira] [Updated] (SOLR-5530) SolrJ NoOpResponseParser

2014-01-26 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5530:
---

Attachment: SOLR-5530.patch

Added comments to org.apache.solr.client.solrj.impl.NoOpResponseParser.
Added tests checking NoOpResponseParser.
Added a test demonstrating how to use NoOpResponseParser when constructing a 
query.

 SolrJ NoOpResponseParser
 

 Key: SOLR-5530
 URL: https://issues.apache.org/jira/browse/SOLR-5530
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Reporter: Upayavira
Priority: Minor
 Attachments: PATCH-5530.txt, SOLR-5530.patch


 If you want the raw response string out of SolrJ, the advice seems to be to 
 just use an HttpClient directly. 
 However, sometimes you may have a lot of SolrJ infrastructure already in 
 place to build out queries, etc, so it would be much simpler to just use 
 SolrJ to do the work.
 This patch offers a NoOpResponseParser, which simply puts the entire response 
 into an entry in a NamedList.
 Because the response isn't parsed into a QueryResponse, usage is slightly 
 different:
 HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
 SolrQuery query = new SolrQuery("*:*");
 QueryRequest req = new QueryRequest(query);
 server.setParser(new NoOpResponseParser());
 NamedList<Object> resp = server.request(req);
 String responseString = (String) resp.get("response");
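
For context, the behavior the description attributes to NoOpResponseParser can be sketched stand-alone (hypothetical class, not the actual SolrJ implementation): the raw body is read verbatim and stored, unparsed, under a single "response" key.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.*;

public class NoOpParseSketch {
    // Read the whole response body without parsing it and wrap it in a map,
    // mirroring the single NamedList entry the real parser returns.
    static Map<String, Object> processResponse(InputStream body) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            body.transferTo(buf);
            Map<String, Object> rsp = new LinkedHashMap<>();
            rsp.put("response", new String(buf.toByteArray(), StandardCharsets.UTF_8));
            return rsp;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        InputStream raw = new ByteArrayInputStream(
                "{\"responseHeader\":{\"status\":0}}".getBytes(StandardCharsets.UTF_8));
        System.out.println(processResponse(raw).get("response"));
    }
}
```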






[jira] [Updated] (SOLR-5589) Disabled replication in config is ignored

2014-01-24 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5589:
---

Attachment: SOLR-5589.patch

Added ReplicationHandler tests and configuration.
The ReplicationHandler state is tested by a details request 
(http://slave_host:port/solr/replication?command=details) and by getting 
handler statistics.

 Disabled replication in config is ignored
 -

 Key: SOLR-5589
 URL: https://issues.apache.org/jira/browse/SOLR-5589
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.5
Reporter: alexey
 Fix For: 4.7

 Attachments: SOLR-5589.patch, SOLR-5589.patch








[jira] [Updated] (SOLR-5598) LanguageIdentifierUpdateProcessor ignores all but the first value of multiValued string fields

2014-01-22 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5598:
---

Attachment: SOLR-5598.patch

Added content logging in case of a non-string value.
Added a test for a multiValued field with the 1st value an empty string.
Added a test for a multiValued field with values in two languages (mostly en); 
resolved as en.

 LanguageIdentifierUpdateProcessor ignores all but the first value of 
 multiValued string fields
 --

 Key: SOLR-5598
 URL: https://issues.apache.org/jira/browse/SOLR-5598
 Project: Solr
  Issue Type: Bug
  Components: contrib - LangId
Affects Versions: 4.5.1
Reporter: Andreas Hubold
 Fix For: 4.7

 Attachments: SOLR-5598.patch, SOLR-5598.patch


 The LanguageIdentifierUpdateProcessor just uses the first value of the 
 multiValued field to detect the language. 
 Method {{concatFields}} calls {{doc.getFieldValue(fieldName)}} but should 
 instead iterate over {{doc.getFieldValues(fieldName)}}.
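
The fix described above can be sketched stand-alone (hypothetical code; SolrInputDocument is stood in for by a plain map of field name to values): iterate over all values of each field instead of reading only the first.

```java
import java.util.*;

public class ConcatFields {
    // Concatenate every string value of each listed field. The bug was the
    // equivalent of contributing only values.get(0) per field to language
    // detection.
    static String concatFields(Map<String, List<Object>> doc, Collection<String> fieldNames) {
        StringBuilder sb = new StringBuilder();
        for (String fieldName : fieldNames) {
            for (Object value : doc.getOrDefault(fieldName, List.of())) {
                if (value instanceof String) {
                    sb.append((String) value).append(' ');
                }
            }
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        Map<String, List<Object>> doc = Map.of("title_txt", List.of("first value", "second value"));
        System.out.println(concatFields(doc, List.of("title_txt"))); // "first value second value"
    }
}
```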






[jira] [Updated] (SOLR-5526) Query parser extends standard cause NPE on Solr startup

2014-01-22 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5526:
---

Attachment: SOLR-5526.patch

minor fixes:
{quote} QParserPlugin.standardPlugins's javadoc needs to point out the 
importance of these names being static & final so people aren't surprised by 
these tests when new parsers are added in the future.{quote}
Added relevant javadocs.
{quote}
TestStandardQParsers is doing something sufficiently odd that it really needs 
some javadocs explaining why it exists (ie: mention the class loading problems 
associated if there is a standardPlugin that has a non-static, non-final name, 
with an {{@see}} this issue, {{@see QParserPlugin.standardPlugins}}, 
etc...){quote}
Added javadocs
{quote} we should probably make TestStandardQParsers assert that the static & 
final name it finds in each class matches the name associated in 
QParserPlugin.standardPlugins.{quote}
That's actually what TestStandardQParsers does. The unit test takes the classes 
registered in QParserPlugin.standardPlugins and ensures that each class has a 
final and static NAME field.
Added relevant javadocs to TestStandardQParsers.
{quote} solrconfig-query-parser-init.xml has a cut & paste comment referring to 
an unrelated test.{quote}
Fixed, added relevant comments.

{quote} TestInitQParser should have a javadoc comment explaining what the point 
of the test is{quote}
Fixed, added relevant comments.

{quote}TestInitQParser should actually do a query using the fail parser 
registered in the config, to help future-proof us against someone unwittingly 
changing the test config in a way that defeats the point of the test.{quote}
This test already does a query using defType=fail, so I expect the registered 
QParser to be used and a result returned.

 Query parser extends standard cause NPE on Solr startup
 ---

 Key: SOLR-5526
 URL: https://issues.apache.org/jira/browse/SOLR-5526
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.5.1, 4.6, 5.0
 Environment: Linux, Java 1.7.0_45
Reporter: Nikolay Khitrin
Priority: Blocker
 Attachments: NPE_load_trace, SOLR-5526-final-names.patch, 
 SOLR-5526-final-names.patch, SOLR-5526-tests.patch, SOLR-5526.patch, 
 SOLR-5526.patch, SOLR-5526.patch


 Adding any custom query parser that extends a standard one with a non-final 
 {{NAME}} field leads to a messy {{NullPointerException}} during Solr startup.
 The definition of the standard parsers is located in the static 
 {{QParserPlugin.standardPlugins}} array. The array contains the name from the 
 static {{NAME}} field and the class of each plugin.
 But all of the listed parsers are derived from {{QParserPlugin}}, so we have a 
 circular dependency of static fields.
 Normally, the class loader starts initializing {{QParserPlugin}} before all of 
 the listed plugins in {{SolrCore.initQParsers}}, and the array elements 
 referencing the plugins' {{NAME}} fields are filled properly.
 Custom parsers are instantiated before standard parsers. When we subclass a 
 plugin with a non-final {{NAME}} field and add it to Solr via 
 {{solrconfig.xml}}, the class loader starts loading our class before 
 {{QParserPlugin}}. Because {{QParserPlugin}} is the superclass of the plugin, 
 it must be initialized before its subclasses, and the static dereferencing 
 leaves null elements in the {{standardPlugins}} array, because the array is 
 filled before the {{NAME}} fields of the plugins being loaded are assigned.
 How to reproduce:
 # Checkout Solr (trunk or stable)
 # Add the following line to solr/example/solr/collection1/conf/solrconfig.xml
   {{<queryParser name="fail" class="solr.search.LuceneQParserPlugin"/>}}
 # Call ant run-example in the solr folder
 Possible workarounds:
 * dev-workaround: add {{int workaround = 
 QParserPlugin.standardPlugins.length;}} as the first line of
   {{SolrCore.initQParsers}}
 * user-workaround: add a plugin with a final {{NAME}} field (edismax) to 
 solrconfig.xml before subclasses of standard plugins: 
   {{<queryParser name="workaround" 
 class="solr.search.ExtendedDismaxQParserPlugin"/>}}
   
 Possible fix:
 Move {{standardPlugins}} to a new final class to break the circular dependency.
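
The initialization cycle described above can be reproduced outside Solr with two tiny classes (hypothetical names, sketching only the mechanism): a superclass whose static registry reads a subclass's non-constant NAME field.

```java
public class StaticCycleDemo {
    static class Base {
        // Mirrors QParserPlugin.standardPlugins: built from a subclass's NAME.
        static final Object[] REGISTRY = { Child.NAME };
    }

    static class Child extends Base {
        // Not a compile-time constant, so it is only assigned while Child's
        // own static initializer runs.
        static String NAME = "child";
    }

    public static void main(String[] args) {
        // Touching Child first runs Base's static initializer (superclass
        // init) while Child.NAME is still null, so the registry captures null.
        new Child();
        System.out.println(Base.REGISTRY[0]); // prints "null"
    }
}
```

Declaring NAME {{static final}} with a string literal makes it a compile-time constant that the compiler inlines, which is why the final-names patch sidesteps the cycle.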






[jira] [Updated] (SOLR-5526) Query parser extends standard cause NPE on Solr startup

2014-01-21 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5526:
---

Attachment: SOLR-5526-final-names.patch

Missed final NAME field in org.apache.solr.search.SimpleQParserPlugin. Fixed.

 Query parser extends standard cause NPE on Solr startup
 ---

 Key: SOLR-5526
 URL: https://issues.apache.org/jira/browse/SOLR-5526
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.5.1, 4.6, 5.0
 Environment: Linux, Java 1.7.0_45
Reporter: Nikolay Khitrin
Priority: Blocker
 Attachments: NPE_load_trace, SOLR-5526-final-names.patch, 
 SOLR-5526-final-names.patch, SOLR-5526.patch








[jira] [Updated] (SOLR-5526) Query parser extends standard cause NPE on Solr startup

2014-01-21 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5526:
---

Attachment: SOLR-5526-tests.patch

Test reproducing NPE on Solr start-up
Test checking final and static NAME field for all standard parsers

 Query parser extends standard cause NPE on Solr startup
 ---

 Key: SOLR-5526
 URL: https://issues.apache.org/jira/browse/SOLR-5526
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.5.1, 4.6, 5.0
 Environment: Linux, Java 1.7.0_45
Reporter: Nikolay Khitrin
Priority: Blocker
 Attachments: NPE_load_trace, SOLR-5526-final-names.patch, 
 SOLR-5526-final-names.patch, SOLR-5526-tests.patch, SOLR-5526.patch








[jira] [Updated] (SOLR-5526) Query parser extends standard cause NPE on Solr startup

2014-01-21 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5526:
---

Attachment: SOLR-5526.patch

Combined:
Missed final NAME field fixes. 
Test reproducing NPE on Solr start-up
Test checking final and static NAME field for all standard parsers

 Query parser extends standard cause NPE on Solr startup
 ---

 Key: SOLR-5526
 URL: https://issues.apache.org/jira/browse/SOLR-5526
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.5.1, 4.6, 5.0
 Environment: Linux, Java 1.7.0_45
Reporter: Nikolay Khitrin
Priority: Blocker
 Attachments: NPE_load_trace, SOLR-5526-final-names.patch, 
 SOLR-5526-final-names.patch, SOLR-5526-tests.patch, SOLR-5526.patch, 
 SOLR-5526.patch








[jira] [Updated] (SOLR-5526) Query parser extends standard cause NPE on Solr startup

2014-01-19 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-5526:
---

Attachment: NPE_load_trace

NPE stacktrace during solr load

 Query parser extends standard cause NPE on Solr startup
 ---

 Key: SOLR-5526
 URL: https://issues.apache.org/jira/browse/SOLR-5526
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.5.1, 4.6, 5.0
 Environment: Linux, Java 1.7.0_45
Reporter: Nikolay Khitrin
Priority: Blocker
 Attachments: NPE_load_trace, SOLR-5526-final-names.patch, 
 SOLR-5526.patch




