Thanks Alex/Shawn,
Yes, currently we handle this by writing custom code that works from the response and calculates the assets, but we lose the power of the default stats and facet features with this approach.
Also, it's not actually duplicate data; as per our current design the data resides …
If the duplicate data is only indexed, it is not actually duplicated. It is only an index entry plus the record ids where the term appears.
Regards,
Alex
On Thu, Sep 27, 2018, 10:55 AM Balanathagiri Ayyasamypalanivel, <
bala.cit...@gmail.com> wrote:
Hi Alex, thanks, we have that set up already in place; we are now thinking of optimizing further and redesigning the data to avoid this duplication.
Regards,
Bala.
Thanks Shawn for your prompt response.
Actually, we have to filter at query time while calculating the score. The challenge here is that we should not add the asset as a static field at index time; the asset needs to be calculated at query time with some filters applied.
Regards,
Bala.
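Query-time aggregation under filters, as described above, could be expressed with Solr's JSON Facet API. A minimal sketch follows — the field names `acct` and `asset_td` come from later in this thread, while the filter query `status:active` is a made-up placeholder:

```python
import json

# Sketch of a JSON Facet API request body: sum asset_td per acct,
# restricted by a query-time filter rather than a precomputed field.
request_body = {
    "query": "*:*",
    "filter": ["status:active"],  # hypothetical query-time filter
    "limit": 0,                   # we only want the aggregations
    "facet": {
        "per_acct": {
            "type": "terms",
            "field": "acct",
            "facet": {
                "total_asset": "sum(asset_td)"  # computed at query time
            }
        }
    }
}

payload = json.dumps(request_body)
```

This body would be POSTed to the collection's select handler with `Content-Type: application/json`; changing the `filter` list changes which documents contribute to the sums, with no reindexing.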
On Thu,
On 9/26/2018 12:46 PM, Balanathagiri Ayyasamypalanivel wrote:
But the only drawback here is that we have to parse the JSON to compute the sum of the values; is there any other way to handle this scenario?
Solr cannot do that for you. You could put this in your indexing software -- add up the numbers and p…
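Shawn's suggestion — adding up the numbers in the indexing software and sending the total as its own field — could look roughly like this sketch. The input shape and the field names `assets_json` / `asset_total_td` are assumptions for illustration:

```python
import json

def prepare_doc(doc):
    """Pre-compute the asset total at index time so Solr's built-in
    stats and facet features can operate on a plain numeric field."""
    assets = json.loads(doc["assets_json"])        # e.g. {"a1": 10.0, "a2": 5.5}
    doc["asset_total_td"] = sum(assets.values())   # static numeric field for Solr
    return doc

doc = prepare_doc({"id": "1", "assets_json": '{"a1": 10.0, "a2": 5.5}'})
```

The trade-off is the one discussed in this thread: the total is fixed at index time, so any change to the underlying assets requires reindexing the document.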
Well, my feeling is that you are going in the wrong direction, and that maybe you need to focus more on separating your (non-Solr) storage representation from your (Solr) search-oriented representation.
E.g. if your issue is storage, maybe you can focus on a stored=false, indexed=true approach.
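Alex's stored=false / indexed=true idea can be expressed via Solr's Schema API. A hedged sketch of the "add-field" payload — the field name and type here are assumptions, not something this thread specifies:

```python
import json

# Schema API "add-field" payload: the value stays searchable and
# facetable, but is not kept in stored form, which saves index size.
add_field = {
    "add-field": {
        "name": "asset_td",
        "type": "pdouble",
        "indexed": True,
        "stored": False,
        "docValues": True,  # lets faceting/stats work without stored values
    }
}

payload = json.dumps(add_field)
```

This would be POSTed to the collection's schema endpoint; with `stored=false` the raw value cannot be returned in results, which is exactly the storage/search separation Alex describes.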
Any suggestions?
Regards,
Bala.
On Wed, Sep 26, 2018, 2:46 PM Balanathagiri Ayyasamypalanivel <
bala.cit...@gmail.com> wrote:
Hi,
Thanks for the reply; actually we are planning to optimize the huge volume of data.
For example, in our current system we have the layout below, so we can do a facet pivot or stats to get the sum of asset_td for each acct, but the data keeps growing whenever more assets get added.
Id | Accts | assetid
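The facet pivot / stats combination mentioned above (sum of asset_td for each acct) corresponds to request parameters along these lines — a sketch only; the tag name `s1` is arbitrary:

```python
# Classic StatsComponent + pivot facet request: compute stats
# (including the sum) of asset_td inside every acct bucket.
params = {
    "q": "*:*",
    "rows": 0,
    "stats": "true",
    "stats.field": "{!tag=s1}asset_td",  # stats on the asset field
    "facet": "true",
    "facet.pivot": "{!stats=s1}acct",    # one stats block per acct value
}
```

The `{!tag=...}` / `{!stats=...}` local params link the stats field to the pivot, so each acct bucket in the response carries its own sum of asset_td.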
On 9/26/2018 12:20 PM, Balanathagiri Ayyasamypalanivel wrote:
Hi,
Currently I am storing JSON-object values in a string field in Solr. Using this field, in my code I parse the JSON objects and sum the values under them.
In Solr, do we have any option to do this by default when using the JSON object field values?
Regards,
Bala.
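The client-side approach described in this thread — parsing the JSON string field from the response and summing — looks roughly like this sketch. The response shape and the field name `assets_json` are assumptions:

```python
import json

def sum_assets(solr_response):
    """Sum the values inside the JSON-string field of every returned doc.
    This is the custom post-processing step: it works, but it bypasses
    Solr's built-in stats and facet support."""
    total = 0.0
    for doc in solr_response["response"]["docs"]:
        assets = json.loads(doc["assets_json"])  # stored JSON object as a string
        total += sum(assets.values())
    return total

resp = {"response": {"docs": [
    {"id": "1", "assets_json": '{"a1": 2.0, "a2": 3.0}'},
    {"id": "2", "assets_json": '{"a1": 4.5}'},
]}}
total = sum_assets(resp)
```

Because Solr sees only an opaque string, every aggregation like this has to be re-implemented client-side, which is the drawback raised later in the thread.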