Re: How to create my schema and add document, thank you
Thanks. I have moved this to the next stage:

1. Three fields are to be extracted from the raw files; the location of the raw file is the fourth field, and all four fields become a document.
2. Only the three fields will be indexed.
3. The search result should be re-formatted to include the three fields plus the content, by parsing the location and retrieving the original raw data's content.

The current challenge is: the raw data is zipped, and I am not sure what I can do to process the zipped files. Can Solr handle that?

My plan is below: a Java (?) based website will be created, with a GUI for accepting the user's keyword. The keyword will be used to form a POST URL, the URL will be used to GET a response from Solr, and the response will contain the corresponding message location. The Java program will fetch the zipped file and unzip it; as one zip file could contain multiple messages, the Java program will need to parse out the matched message(s). The parsed matched message(s) will be shown to the end user together with the other metadata (the three indexed fields).

Is this a feasible plan? Is there a better solution? Thank you very much.

**
*Sincerely yours,*

*Raymond*

On Thu, Apr 5, 2018 at 7:11 AM, Adhyan Arizki wrote:

> Raymond,
>
> 1. Please ensure your Solr instance does indeed load up the correct
> managed-schema file. You do not need to create the file; it should have
> been created automatically in the newer versions of Solr out of the box.
> You just need to edit it.
> 2. Have you reloaded your instance after you made the modification?
> > On Thu, Apr 5, 2018 at 6:56 PM, Raymond Xie wrote: > > > I have the data ready for index now, it is a json file: > > > > {"122": "20180320-08:08:35.038", "49": "VIPER", "382": "0", "151": "1.0", > > "9": "653", "10071": "20180320-08:08:35.088", "15": "JPY", "56": "XSVC", > > "54": "1", "10202": "APMKTMAKING", "10537": "XOSE", "10217": "Y", "48": > > "179492540", "201": "1", "40": "2", "8": "FIX.4.4", "167": "OPT", "421": > > "JPN", "10292": "115", "10184": "3379122", "456": "101", "11210": > > "3379122", "1133": "G", "10515": "178", "10": "200", "11032": > "-1", > > "10436": "20180320-08:08:35.038", "10518": "178", "11": > > "3379122", "75": > > "20180320", "10005": "178", "10104": "Y", "35": "RIO", "10208": > > "APAC.VIPER.OOE", "59": "0", "60": "20180320-08:08:35.088", "528": "P", > > "581": "13", "1": "TEST", "202": "25375.0", "455": "179492540", "55": > > "JNI253D8.OS", "100": "XOSE", "52": "20180320-08:08:35.088", "10241": > > "viperooe", "150": "A", "10039": "viperooe", "39": "A", "10438": > "RIO.4.5", > > "38": "1", "37": "3379122", "372": "D", "660": "102", "44": > "2.0", > > "10066": "20180320-08:08:35.038", "29": "4", "50": "JPNIK01", "22": > "101"} > > > > You can inspect the json here: https://jsonformatter.org/ > > > > I need to create index and enable searching on tags: 37, 75 and 10242 > > (where available, this sample message doesn't have it) > > > > My understanding is I need to create the file managed-schema, I added two > > fields as below: > > > > > multiValued="true"/> > > > stored="false" multiValued="true"/> > > > > Then I go back to Solr Admin, I don't see the two new fields in Schema > > section > > > > Anything I am missing here? and once the two fields are put in the > > managed-schema, can I add the json file through upload in Solr Admin? > > > > Thank you very much. > > > > > > ** > > *Sincerely yours,* > > > > > > *Raymond* > > > > > > -- > > Best regards, > Adhyan Arizki >
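Editorial note on the zip question above: Solr itself does not unzip archives at query time, so the unzip-and-filter step would live in the application tier, as the plan describes. A minimal sketch of that step (Python standard library only; the tag-37 order-id matching and the one-message-per-line layout are assumptions for illustration, the real messages are SOH-delimited FIX):

```python
import io
import zipfile

def extract_matching_messages(zip_bytes, order_id, delimiter="\x01"):
    """Open a zip archive (which may contain several raw message files),
    split each entry into messages (assumed one per line), and keep those
    whose tag 37 (order id) matches the searched value."""
    matches = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            text = zf.read(name).decode("utf-8", errors="replace")
            for line in text.splitlines():
                # Parse "tag=value" pairs separated by the FIX delimiter.
                fields = dict(
                    f.split("=", 1) for f in line.split(delimiter) if "=" in f
                )
                if fields.get("37") == order_id:
                    matches.append(line)
    return matches
```

The same approach translates directly to `java.util.zip.ZipInputStream` on the Java side of the proposed website.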
Re: Migrating from Solr 6.6 getStatistics() to Solr 7.x
Thank you!

On Fri, Apr 6, 2018 at 10:34 PM, Chris Hostetter wrote:

> : In my Solr 6.6 based code, I have the following line that gets the total
> : number of documents in a collection:
> :
> : totalDocs=indexSearcher.getStatistics().get("numDocs"))
> ...
> : With Solr 7.2.1, 'getStatistics' is no longer available, and it seems that
> : it is replaced by 'collectionStatistics' or 'termStatistics':
> ...
> : So my question is what is the equivalent statement in Solr 7.2.1? Is it:
> :
> : solrIndexSearcher.collectionStatistics("numDocs").maxDoc();
>
> Uh... no. That's not quite true.
>
> In the 6.x code line, getStatistics() was part of the SolrInfoMBean API
> that SolrIndexSearcher and many other Solr objects implemented...
>
> http://lucene.apache.org/solr/6_6_0/solr-core/org/apache/solr/search/SolrIndexSearcher.html#getStatistics--
>
> In 7.0, SolrInfoMBean was replaced with SolrInfoBean as part of the switch
> over to the new, more robust Metrics API...
>
> https://lucene.apache.org/solr/guide/7_0/major-changes-in-solr-7.html#jmx-support-and-mbeans
> https://lucene.apache.org/solr/guide/7_0/metrics-reporting.html
> http://lucene.apache.org/solr/7_0_0/solr-core/org/apache/solr/core/SolrInfoBean.html
>
> (The collectionStatistics() and termStatistics() methods are lower-level
> Lucene concepts.)
>
> IIRC the closest 7.x equivalent to "indexSearcher.getStatistics()" is
> "indexSearcher.getMetricsSnapshot()" ... but the keys in that map will
> have slightly different/longer names than they did before; you can use
> "indexSearcher.getMetricNames()" to see the full list.
>
> ...but frankly that's all a very complicated way to get "numDocs".
> If you're writing a Solr plugin that has direct access to a
> SolrIndexSearcher instance ... you can just call the
> "solrIndexSearcher.numDocs()" method and make your life a lot simpler.
>
> -Hoss
> http://www.lucidworks.com/
Re: Running an analyzer chain in an update request processor
Thanks, I should have mentioned that I’m doing this in a script URP.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)

> On Apr 6, 2018, at 3:06 PM, Steve Rowe wrote:
>
> Hi Walter,
>
> I’ve seen Erik Hatcher recommend using the StatelessScriptUpdateProcessor for
> this purpose, e.g. on slides 10-11 of
> https://www.slideshare.net/erikhatcher/solr-indexing-and-analysis-tricks .
>
> More info at https://wiki.apache.org/solr/ScriptUpdateProcessor and
> https://lucene.apache.org/solr/7_3_0/solr-core/org/apache/solr/update/processor/StatelessScriptUpdateProcessorFactory.html
>
> --
> Steve
> www.lucidworks.com
>
>> On Apr 6, 2018, at 5:46 PM, Walter Underwood wrote:
>>
>> Is there an easy way to define an analyzer chain in schema.xml then run it
>> in an update request processor?
>>
>> I want to run a chain ending in the minhash token filter, then take those
>> minhashes, convert them to hex, and put them in a string field. I’d like the
>> values stored.
>>
>> It seems like this could all work in an update request processor. Grab the
>> text from one field, run it through the chain, format the output tokens and
>> add them to the field for hashes.
>>
>> wunder
>> Walter Underwood
>> wun...@wunderwood.org
>> http://observer.wunderwood.org/ (my blog)
Re: Running an analyzer chain in an update request processor
Hi Walter, I’ve seen Erik Hatcher recommend using the StatelessScriptUpdateProcessor for this purpose, e.g. on slides 10-11 of https://www.slideshare.net/erikhatcher/solr-indexing-and-analysis-tricks . More info at https://wiki.apache.org/solr/ScriptUpdateProcessor and https://lucene.apache.org/solr/7_3_0/solr-core/org/apache/solr/update/processor/StatelessScriptUpdateProcessorFactory.html -- Steve www.lucidworks.com > On Apr 6, 2018, at 5:46 PM, Walter Underwood wrote: > > Is there an easy way to define an analyzer chain in schema.xml then run it in > an update request processor? > > I want to run a chain ending in the minhash token filter, then take those > minhashes, convert them to hex, and put them in a string field. I’d like the > values stored. > > It seems like this could all work in an update request processor. Grab the > text from one field, run it through the chain, format the output tokens and > add them to the field for hashes. > > wunder > Walter Underwood > wun...@wunderwood.org > http://observer.wunderwood.org/ (my blog) >
Running an analyzer chain in an update request processor
Is there an easy way to define an analyzer chain in schema.xml then run it in an update request processor? I want to run a chain ending in the minhash token filter, then take those minhashes, convert them to hex, and put them in a string field. I’d like the values stored. It seems like this could all work in an update request processor. Grab the text from one field, run it through the chain, format the output tokens and add them to the field for hashes. wunder Walter Underwood wun...@wunderwood.org http://observer.wunderwood.org/ (my blog)
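Editorial note: the formatting step Walter describes (take the chain's output tokens, convert them to hex, add them to a string field) can be sketched independently of Solr. The Python below is purely an illustrative stand-in, not Solr's API: `md5` here is only a placeholder for the MinHash values the real MinHash token filter would emit, and the fixed 16-character width is an assumption.

```python
import hashlib

def to_hex_tokens(tokens):
    """Illustrative stand-in for the URP formatting step: take the tokens
    produced by an analysis chain (here, plain strings) and render each as
    a fixed-width hex value suitable for storing in a string field.
    Real minhash values would come from the analyzer, not from md5."""
    return [hashlib.md5(t.encode("utf-8")).hexdigest()[:16] for t in tokens]
```

In a StatelessScriptUpdateProcessor, the equivalent logic would run in the script's `processAdd`, reading the source field from the document and adding one hex string per output token to the hash field.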
Re: Migrating from Solr 6.6 getStatistics() to Solr 7.x
: In my Solr 6.6 based code, I have the following line that gets the total
: number of documents in a collection:
:
: totalDocs=indexSearcher.getStatistics().get("numDocs"))
...
: With Solr 7.2.1, 'getStatistics' is no longer available, and it seems that
: it is replaced by 'collectionStatistics' or 'termStatistics':
...
: So my question is what is the equivalent statement in Solr 7.2.1? Is it:
:
: solrIndexSearcher.collectionStatistics("numDocs").maxDoc();

Uh... no. That's not quite true.

In the 6.x code line, getStatistics() was part of the SolrInfoMBean API that SolrIndexSearcher and many other Solr objects implemented...

http://lucene.apache.org/solr/6_6_0/solr-core/org/apache/solr/search/SolrIndexSearcher.html#getStatistics--

In 7.0, SolrInfoMBean was replaced with SolrInfoBean as part of the switch over to the new, more robust Metrics API...

https://lucene.apache.org/solr/guide/7_0/major-changes-in-solr-7.html#jmx-support-and-mbeans
https://lucene.apache.org/solr/guide/7_0/metrics-reporting.html
http://lucene.apache.org/solr/7_0_0/solr-core/org/apache/solr/core/SolrInfoBean.html

(The collectionStatistics() and termStatistics() methods are lower-level Lucene concepts.)

IIRC the closest 7.x equivalent to "indexSearcher.getStatistics()" is "indexSearcher.getMetricsSnapshot()" ... but the keys in that map will have slightly different/longer names than they did before; you can use "indexSearcher.getMetricNames()" to see the full list.

...but frankly that's all a very complicated way to get "numDocs". If you're writing a Solr plugin that has direct access to a SolrIndexSearcher instance ... you can just call the "solrIndexSearcher.numDocs()" method and make your life a lot simpler.

-Hoss
http://www.lucidworks.com/
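Editorial note: for readers who are not inside a Solr plugin at all, the same count is also exposed over HTTP as `numFound` of a match-all query (`/select?q=*:*&rows=0`). A minimal Python sketch of reading it from the response body, assuming the standard Solr JSON response writer layout:

```python
import json

def num_docs(select_response: str) -> int:
    """Extract the document count from the JSON body returned by
    /select?q=*:*&rows=0 (standard Solr JSON response layout)."""
    body = json.loads(select_response)
    return body["response"]["numFound"]

# Example response body trimmed to the relevant fields:
sample = '{"responseHeader":{"status":0},"response":{"numFound":653,"start":0,"docs":[]}}'
print(num_docs(sample))  # 653
```

This is a client-side alternative, not a replacement for `solrIndexSearcher.numDocs()` inside plugin code.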
Migrating from Solr 6.6 getStatistics() to Solr 7.x
Hi all,

In my Solr 6.6 based code, I have the following line that gets the total number of documents in a collection:

totalDocs=indexSearcher.getStatistics().get("numDocs"))

where indexSearcher is an instance of "SolrIndexSearcher". With Solr 7.2.1, 'getStatistics' is no longer available, and it seems that it is replaced by 'collectionStatistics' or 'termStatistics':

https://lucene.apache.org/solr/7_2_1/solr-core/org/apache/solr/search/SolrIndexSearcher.html?is-external=true

So my question is: what is the equivalent statement in Solr 7.2.1? Is it:

solrIndexSearcher.collectionStatistics("numDocs").maxDoc();

The API warns that it is still experimental and might change in incompatible ways in the next release. Is there more 'stable' code for getting this done?

Thanks
Re: Basic Security Plugin and Collection Shard Distribution
As far as logging goes: with PKIAuthenticationPlugin, RuleBasedAuthorizationPlugin, and HttpSolrCall set to TRACE, the following is all that is seen in the log file of host1 for the above request:

2018-04-06 14:51:34.775 DEBUG (qtp329611835-8790) [ ] o.a.s.s.HttpSolrCall PkiAuthenticationPlugin says authorization required : true
2018-04-06 14:51:34.775 DEBUG (qtp329611835-8790) [ ] o.a.s.s.HttpSolrCall AuthorizationContext : userPrincipal: [[principal: solrreader]] type: [READ], collections: [c2, c2,], Path: [/select] path : /select params :null
2018-04-06 14:51:34.776 INFO (qtp329611835-8790) [ ] o.a.s.s.RuleBasedAuthorizationPlugin This resource is configured to have a permission { "name":"all", "role":"admin"}, The principal [principal: solrreader] does not have the right role
2018-04-06 14:51:34.776 INFO (qtp329611835-8790) [ ] o.a.s.s.HttpSolrCall USER_REQUIRED auth header Basic context : userPrincipal: [[principal: solrreader]] type: [READ], collections: [c2, c2,], Path: [/select] path : /select params :null

On Thu, Apr 5, 2018 at 12:02 PM Chris Ulicny wrote:

> Hi all,
>
> I've been periodically running into a strange permissions issue and have
> finally got some useful information on it. We've run into the issue on
> v6.3.0 and v7.x clusters.
>
> Assume we have 2 hosts (1 instance on each) with 2 collections. Collection
> c1 has 2 shards, and collection c2 has 1 shard. Each only has one copy of
> each shard.
> The distribution is as follows:
>
> host1: c1-shard1
> host2: c1-shard2, c2-shard1
>
> We have security enabled on it where the authorization section looks like:
>
> "authorization":{
>   "class":"solr.RuleBasedAuthorizationPlugin",
>   "permissions":[
>     {"name":"read","role":"reader"},
>     {"name":"security-read","role":"reader"},
>     {"name":"schema-read","role":"reader"},
>     {"name":"config-read","role":"reader"},
>     {"name":"core-admin-read","role":"reader"},
>     {"name":"collection-admin-read","role":"reader"},
>     {"name":"update","role":"writer"},
>     {"name":"security-edit","role":"admin"},
>     {"name":"schema-edit","role":"admin"},
>     {"name":"config-edit","role":"admin"},
>     {"name":"core-admin-edit","role":"admin"},
>     {"name":"collection-admin-edit","role":"admin"},
>     {"name":"all","role":"admin"}],
>   "user-role":{
>     "solradmin":["reader","writer","admin"],
>     "solrreader":["reader"],
>     "solrwriter":["reader","writer"]}}
>
> When sending the query http://host1:8983/solr/c2/select?q=*:* as solrreader
> or solrwriter, a 403 response is returned.
>
> However, when sending the query as solradmin, the expected results are
> returned.
>
> So what are we missing to allow the reader role to query a collection that
> is part of the SolrCloud instance, but not actually present on the host?
>
> Thanks,
> Chris
Re: some parent documents
Lucene tends to avoid full scans where possible by leap-frogging on skip lists. It will enumerate all *matching* docs in O(m) and rank every result in O(log(page size)), i.e. O(m log p). Earlier, I recall that BJQ enumerated all matching children even though most of the time it's enough to find only one; potentially that's room for improvement, but it should never be a problem.

On Fri, Apr 6, 2018 at 4:15 PM, Arturas Mazeika wrote:

> Hi Mikhail et al,
>
> I must say that this complexity question is still bugging me, and I wonder
> if it is possible to get even partial answers in Big-O notation..
>
> Say that we have N (for example 10^6) documents, each having 10 SKUs, each
> in turn having 10 storages, and every product having 10 vendors. Consider
> the answer to be 1% large (there are 10,000 documents satisfying the
> query). What would be the complexity of answering it?
>
> Cheers,
> Arturas
>
> On Thu, Apr 5, 2018 at 11:47 AM, Arturas Mazeika wrote:
>
> > Hi Mikhail et al,
> >
> > Thanks a lot for sharing the code snippet. I would not have been able to
> > dig into this Java file myself to investigate the complexity of the
> > search query. Scanning the code, I get a feeling that it is well
> > structured and well thought out. There are concepts like advance (Parent
> > Approximation) as well as ParentPhaseTwo, matches, matchCost,
> > BlockJoinScorer, Explanation, and query rewriting. Is there
> > documentation available on how the architecture looks and what school of
> > thought/doctrine is used here?
> >
> > W.r.t. my complexity question, I expected to see an answer in Big-O
> > notation (rather than as Java code). Typically one makes assumptions
> > about the key parameters (e.g., number of products N_P, number of SKUs
> > N_Sk, number of storages N_St, number of vendors N_V, and join
> > selectivities (in terms of percentage) p(P,SK), p(SK,ST), p(P,V) between
> > the corresponding entities) and computes a formula.
> > > > What is the complexity of this query in big-O notation? > > > > Cheers, > > Arturas > > > > > > > > On Wed, Apr 4, 2018 at 6:16 PM, Mikhail Khludnev > wrote: > > > >> > > >> > What's happening under the hood of > >> > solr in answering query [1] from [2]? > >> > >> https://github.com/apache/lucene-solr/blob/master/lucene/ > >> join/src/java/org/apache/lucene/search/join/ToParentBlo > >> ckJoinQuery.java#L178 > >> > >> On Wed, Apr 4, 2018 at 3:39 PM, Arturas Mazeika > >> wrote: > >> > >> > Hi Mikhail et al, > >> > > >> > Thanks a lot for a very thorough answer. This is an impressive piece > of > >> > knowledge you just shared. > >> > > >> > Not surprisingly, I was caught unprepared by the 'v=...' part of the > >> > answer. This brought me to the links you posted (starts with http). > From > >> > those links I went to the more updated link (starts with https), which > >> > brought me to other very resourceful links. Combined with some > >> meditation > >> > session, it came into my mind that it is not possible to express block > >> > queries using mathematical logic only. The format of the input > document > >> is > >> > deeply built into the query expression and answering. Expressing these > >> > queries mathematically / logically may give an impression that solr is > >> > capable of answering (NP-?) hard problems. I have a feeling though > that > >> > solr answers to queries in polynomial (or even almost linear) times. > >> > > >> > Just to connect the remaining dots.. What's happening under the hood > of > >> > solr in answering query [1] from [2]? Is it really so that inverted > >> index > >> > is used to identify the vectors of ids, that are scanned linearly in a > >> hope > >> > to get matches on _root_ and other internal variables? 
> >> > > >> > [1] q=+{!parent which=type_s:product v=$skuq} +{!parent > >> > which=type_s:product v=$vendorq}&skuq=+COLOR_s:Blue +SIZE_s:XL > +{!parent > >> > which=type_s:sku v='+QTY_i:[10 TO *] +STATE_s:CA'}&vendorq=+NAME_s: > Bob > >> > +PRICE_i:[20 TO 25] > >> > [2] > >> > https://blog.griddynamics.com/searching-grandchildren-and- > >> > siblings-with-solr-block-join/ > >> > > >> > Thanks! > >> > Arturas > >> > > >> > On Wed, Apr 4, 2018 at 12:36 PM, Mikhail Khludnev > >> wrote: > >> > > >> > > q=+{!parent which=ntype:p v='+msg:Hello +person:Arturas'} +{!parent > >> > which= > >> > > ntype:p v='+msg:ciao +person:Vai'} > >> > > > >> > > On Wed, Apr 4, 2018 at 12:19 PM, Arturas Mazeika > > >> > > wrote: > >> > > > >> > > > Hi Mikhail et al, > >> > > > > >> > > > It seems to me that the nested documents must include nodes that > >> encode > >> > > the > >> > > > level of nodes (within the document). Therefore, the minimal > example > >> > must > >> > > > include the node type. Is the following structure sufficient? > >> > > > > >> > > > { > >> > > > "id":1, > >> > > > "ntype":"p", > >> > > > "_childDocuments_": > >> > > > [ > >> > > > {"id":"1_1", "ntype":"c", "person":"Vai", > "time":"3:14", > >> > > > "
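Editorial note: the leap-frog enumeration mentioned in the reply above can be illustrated with a small sketch. This is a simplified model, not Lucene code: two sorted doc-id lists stand in for postings, and `bisect` stands in for a skip list's `advance(target)`. Each iterator only moves forward, so the conjunction visits on the order of the matching docs rather than scanning every id.

```python
import bisect

def leapfrog_intersect(a, b):
    """Intersect two sorted doc-id lists the way a conjunction of postings
    iterators does: repeatedly advance whichever iterator is behind to the
    other one's current doc id, emitting ids where both land together."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i = bisect.bisect_left(a, b[j], i)  # advance(a, target=b[j])
        else:
            j = bisect.bisect_left(b, a[i], j)  # advance(b, target=a[i])
    return out

print(leapfrog_intersect([1, 3, 5, 7, 9, 11], [2, 3, 7, 10, 11]))  # [3, 7, 11]
```

In the block-join case the same forward-only advance is applied between child matches and the parent bitset, which is why the cost tracks the number of matches rather than the index size.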
Re: some parent documents
Hi Mikhail et al,

I must say that this complexity question is still bugging me, and I wonder if it is possible to get even partial answers in Big-O notation..

Say that we have N (for example 10^6) documents, each having 10 SKUs, each in turn having 10 storages, and every product having 10 vendors. Consider the answer to be 1% large (there are 10,000 documents satisfying the query). What would be the complexity of answering it?

Cheers,
Arturas

On Thu, Apr 5, 2018 at 11:47 AM, Arturas Mazeika wrote:

> Hi Mikhail et al,
>
> Thanks a lot for sharing the code snippet. I would not have been able to
> dig into this Java file myself to investigate the complexity of the search
> query. Scanning the code, I get a feeling that it is well structured and
> well thought out. There are concepts like advance (Parent Approximation)
> as well as ParentPhaseTwo, matches, matchCost, BlockJoinScorer,
> Explanation, and query rewriting. Is there documentation available on how
> the architecture looks and what school of thought/doctrine is used here?
>
> W.r.t. my complexity question, I expected to see an answer in Big-O
> notation (rather than as Java code). Typically one makes assumptions
> about the key parameters (e.g., number of products N_P, number of SKUs
> N_Sk, number of storages N_St, number of vendors N_V, and join
> selectivities (in terms of percentage) p(P,SK), p(SK,ST), p(P,V) between
> the corresponding entities) and computes a formula.
>
> What is the complexity of this query in big-O notation?
>
> Cheers,
> Arturas
>
> On Wed, Apr 4, 2018 at 6:16 PM, Mikhail Khludnev wrote:
>
>> >
>> > What's happening under the hood of
>> > solr in answering query [1] from [2]?
>> >> https://github.com/apache/lucene-solr/blob/master/lucene/ >> join/src/java/org/apache/lucene/search/join/ToParentBlo >> ckJoinQuery.java#L178 >> >> On Wed, Apr 4, 2018 at 3:39 PM, Arturas Mazeika >> wrote: >> >> > Hi Mikhail et al, >> > >> > Thanks a lot for a very thorough answer. This is an impressive piece of >> > knowledge you just shared. >> > >> > Not surprisingly, I was caught unprepared by the 'v=...' part of the >> > answer. This brought me to the links you posted (starts with http). From >> > those links I went to the more updated link (starts with https), which >> > brought me to other very resourceful links. Combined with some >> meditation >> > session, it came into my mind that it is not possible to express block >> > queries using mathematical logic only. The format of the input document >> is >> > deeply built into the query expression and answering. Expressing these >> > queries mathematically / logically may give an impression that solr is >> > capable of answering (NP-?) hard problems. I have a feeling though that >> > solr answers to queries in polynomial (or even almost linear) times. >> > >> > Just to connect the remaining dots.. What's happening under the hood of >> > solr in answering query [1] from [2]? Is it really so that inverted >> index >> > is used to identify the vectors of ids, that are scanned linearly in a >> hope >> > to get matches on _root_ and other internal variables? >> > >> > [1] q=+{!parent which=type_s:product v=$skuq} +{!parent >> > which=type_s:product v=$vendorq}&skuq=+COLOR_s:Blue +SIZE_s:XL +{!parent >> > which=type_s:sku v='+QTY_i:[10 TO *] +STATE_s:CA'}&vendorq=+NAME_s:Bob >> > +PRICE_i:[20 TO 25] >> > [2] >> > https://blog.griddynamics.com/searching-grandchildren-and- >> > siblings-with-solr-block-join/ >> > >> > Thanks! 
>> > Arturas >> > >> > On Wed, Apr 4, 2018 at 12:36 PM, Mikhail Khludnev >> wrote: >> > >> > > q=+{!parent which=ntype:p v='+msg:Hello +person:Arturas'} +{!parent >> > which= >> > > ntype:p v='+msg:ciao +person:Vai'} >> > > >> > > On Wed, Apr 4, 2018 at 12:19 PM, Arturas Mazeika >> > > wrote: >> > > >> > > > Hi Mikhail et al, >> > > > >> > > > It seems to me that the nested documents must include nodes that >> encode >> > > the >> > > > level of nodes (within the document). Therefore, the minimal example >> > must >> > > > include the node type. Is the following structure sufficient? >> > > > >> > > > { >> > > > "id":1, >> > > > "ntype":"p", >> > > > "_childDocuments_": >> > > > [ >> > > > {"id":"1_1", "ntype":"c", "person":"Vai", "time":"3:14", >> > > > "msg":"Hello"}, >> > > > {"id":"1_2", "ntype":"c", "person":"Arturas", "time":"3:14", >> > > > "msg":"Hello"}, >> > > > {"id":"1_3", "ntype":"c", "person":"Vai", "time":"3:15", >> > > > "msg":"Coz Mathias is working on another system- different >> screen."}, >> > > > {"id":"1_4", "ntype":"c", "person":"Vai", "time":"3:15", >> > > > "msg":"It can get annoying"}, >> > > > {"id":"1_5", "ntype":"c", "person":"Arturas", "time":"3:15", >> > > > "msg":"Thank you. this is very nice of you"}, >> > > > {"id":"1_6", "ntype":"c", "person":"Vai", "time":"3:16", >> > > > "msg":"ciao"}, >> > > > {"id":"1_7",
Urgent! How to retrieve the whole message in the Solr search result?
I am using Solr for the following search need: raw data: in FIX format, it's OK if you don't know what it is, treat it as csv with a special delimiter. parsed data: from raw data, all in the same format of a bunch of JSON format with all 100+ fields. Example: Raw data: delimiter is \u001: 8=FIX.4.4 9=653 35=RIO 1=TEST 11=3379122 38=1 44=2.0 39=A 40=2 49=VIPER 50=JPNIK01 54=1 55=JNI253D8.OS 56=XSVC 59=0 75=20180350 100=XOSE 10039=viperooe 10241=viperooe 150=A 372=D 122=20180320-08:08:35.038 10066=20180320-08:08:35.038 10436=20180320-08:08:35.038 202=25375.0 52=20180320-08:08:35.088 60=20180320-08:08:35.088 10071=20180320-08:08:35.088 11210=3379122 37=3379122 10184=3379122 201=1 29=4 10438=RIO.4.5 10005=178 10515=178 10518=178 581=13 660=102 1133=G 528=P 10104=Y 10202=APMKTMAKING 10208=APAC.VIPER.OOE 10217=Y 10292=115 11032=-1 382=0 10537=XOSE 15=JPY 167=OPT 48=179492540 455=179492540 22=101 456=101 151=1.0 421=JPN 10=200 Parsed data: in json: {"122": "20180320-08:08:35.038", "49": "VIPER", "382": "0", "151": "1.0", "9": "653", "10071": "20180320-08:08:35.088", "15": "JPY", "56": "XSVC", "54": "1", "10202": "APMKTMAKING", "10537": "XOSE", "10217": "Y", "48": "179492540", "201": "1", "40": "2", "8": "FIX.4.4", "167": "OPT", "421": "JPN", "10292": "115", "10184": "3379122", "456": "101", "11210": "3379122", "1133": "G", "10515": "178", "10": "200", "11032": "-1", "10436": "20180320-08:08:35.038", "10518": "178", "11": "3379122", *"75": "20180320"*, "10005": "178", "10104": "Y", "35": "RIO", "10208": "APAC.VIPER.OOE", "59": "0", "60": "20180320-08:08:35.088", "528": "P", "581": "13", "1": "TEST", "202": "25375.0", "455": "179492540", "55": "JNI253D8.OS", "100": "XOSE", "52": "20180320-08:08:35.088", "10241": "viperooe", "150": "A", "10039": "viperooe", "39": "A", "10438": "RIO.4.5", "38": "1", *"37": "3379122"*, "372": "D", "660": "102", "44": "2.0", "10066": "20180320-08:08:35.038", "29": "4", "50": "JPNIK01", "22": "101"} The fields used for searching is order_id (tag 
37) and trd_date (tag 75). I will create the schema with the two fields added to it.

At the moment I can get the result by:

http://192.168.112.141:8983/solr/fix_messages/select?q=37:3379122

where 37 is the order_id and 3379122 is the value to search for in the field "37".

The result I get is:

{
  "responseHeader":{
    "status":0,
    "QTime":6,
    "params":{
      "q":"37:3379122"}},
  "response":{"numFound":1,"start":0,"docs":[
      {
        "122":["20180320-08:08:35.038"],
        "49":["VIPER"],
        "382":[0],
        "151":[1.0],
        "9":[653],
        "10071":["20180320-08:08:35.088"],
        "15":["JPY"],
        "56":["XSVC"],
        "54":[1],
        "10202":["APMKTMAKING"],

I need to show the result as below:

1. the order_id: the term "order_id" must be displayed instead of its actual tag 37;
2. the trd_date: the term "trd_date" must be displayed in the result;
3. the whole message: the whole raw message must be displayed in the result;
4. the two fields order_id and trd_date must be highlighted.

Can anyone tell me how to do it? Thank you very much in advance.

**
*Sincerely yours,*

*Raymond*
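Editorial note: requirements 1 and 2 above can be handled client-side by remapping tag numbers to friendly names in the returned docs. A minimal sketch, assuming a hand-maintained tag-to-name map (the map below is hypothetical and covers only the two tags mentioned); requirement 4 would additionally use Solr's standard highlighting parameters (`hl=true&hl.fl=...`) on the select URL:

```python
# Assumed mapping from FIX tag number to display name; extend as needed.
TAG_NAMES = {"37": "order_id", "75": "trd_date"}

def rename_tags(doc):
    """Replace known FIX tag numbers in a Solr result doc with friendly
    names for display; unknown tags pass through unchanged."""
    return {TAG_NAMES.get(k, k): v for k, v in doc.items()}

print(rename_tags({"37": ["3379122"], "75": ["20180320"], "49": ["VIPER"]}))
# {'order_id': ['3379122'], 'trd_date': ['20180320'], '49': ['VIPER']}
```

Requirement 3 (showing the whole raw message) would either store the raw message in an additional stored field at index time, or fetch it from the original file location as in the other thread above.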
Re: Support LTR RankQuery with Grouping
The patch has not been merged yet; it is available here:

https://github.com/apache/lucene-solr/pull/162

You can try to apply the patch on the current master and see if it fixes the issue. Please let us know if you have any questions.

Cheers,
Diego

From: solr-user@lucene.apache.org At: 04/05/18 05:36:20 To: solr-user@lucene.apache.org
Subject: Support LTR RankQuery with Grouping

I am facing an issue with LTR queries not being supported with grouping. I see the patch for this has been raised here:

https://issues.apache.org/jira/browse/SOLR-8776

Is it available in solr/master (7.2.2) now? It looks like this patch is not merged yet.

-
--Ilay

--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html