Hey Jaoa!
To also address your second question, the purpose of the normalizers is to
ensure that whatever manipulation you did to your feature values offline at
training time (say, to minimize floating-point precision round-off) also gets
reflected online at query rerank time, since you will be
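For reference, normalizers are declared per-feature inside the model you upload to the model store, so the same transformation is applied at rerank time. A minimal sketch (the model name, feature name, weight, and min/max values here are all illustrative):

```json
{
  "class": "org.apache.solr.ltr.model.LinearModel",
  "name": "myModel",
  "features": [
    {
      "name": "originalScore",
      "norm": {
        "class": "org.apache.solr.ltr.norm.MinMaxNormalizer",
        "params": { "min": "0.0", "max": "10.0" }
      }
    }
  ],
  "params": { "weights": { "originalScore": 0.5 } }
}
```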
Hi Michael,
Using your example, if you have 5 different fields, you could create 5
individual SolrFeatures against those fields. The one tricky thing here is
that you want to use different similarity scoring mechanisms against your
fields. By default, Solr uses a single Similarity class
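To make the per-field idea concrete, each SolrFeature can wrap its own query against one field. A sketch of two of the five entries in features.json (the field names, feature names, and choice of the dismax parser are assumptions):

```json
[
  {
    "name": "titleMatch",
    "class": "org.apache.solr.ltr.feature.SolrFeature",
    "params": { "q": "{!dismax qf=title}${user_query}" }
  },
  {
    "name": "bodyMatch",
    "class": "org.apache.solr.ltr.feature.SolrFeature",
    "params": { "q": "{!dismax qf=body}${user_query}" }
  }
]
```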
Hi Jianxiong,
What you say is true. If you want 100 different feature values extracted,
you need to specify 100 different features in the features.json config so
that there is a direct mapping of features in and features out. However,
you more than likely only need to implement one feature class
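In other words, many feature definitions can reuse the same class with different params. For example, the built-in FieldValueFeature can back any number of entries (the feature and field names below are illustrative):

```json
[
  {
    "name": "price",
    "class": "org.apache.solr.ltr.feature.FieldValueFeature",
    "params": { "field": "price" }
  },
  {
    "name": "rating",
    "class": "org.apache.solr.ltr.feature.FieldValueFeature",
    "params": { "field": "rating" }
  }
]
```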
Hey Vincent,
The feature store and model store are both Solr Managed Resources. To
propagate managed resources in distributed mode, including managed
stopwords and synonyms, you have to issue a collection reload command. The
Solr Reference Guide's section on Managed Resources has a bit more on it in the
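For example, a reload issued through the Collections API might look like the following (the host, port, and collection name are placeholders):

```shell
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=myCollection"
```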
Hey Saurabh,
So there are a few things you can do with the LTR plugin and your Solr
collection to solve different degrees of the kind of personalization you
might want. I'll start with the simplest, which isn't exactly what you're
looking for but is very quick to implement and play around
I too have come across this exact same problem. One thing that I have
found is that with autoGeneratePhraseQueries=true, a query for z-score will
match an index that contains 'z score', but with false it will not match.
As to your specific problem with the single token zscore
in
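For context, autoGeneratePhraseQueries is set as an attribute on the field type in the schema. A minimal sketch (the field type name and analyzer chain are assumptions; your schema's analysis chain, e.g. a word-delimiter filter, determines how z-score is actually split):

```xml
<fieldType name="text_general" class="solr.TextField"
           autoGeneratePhraseQueries="true" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```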