Hi Evans,
Thanks for your reply.
I would like to show both the ItemId and the ItemName together in the same
JSON output bucket.
Currently I'm only able to show one of them in one bucket. If I want to
show both, it will be shown in 2 buckets, like the one below, which will
probably cause the
We bounced ZooKeeper nodes one by one but no change.
Since this is our Prod server (100M+ docs) we don't want to have to
reindex from scratch (takes 7+ days).
So we're considering editing /collections//state.json via
zkcli.sh
Thoughts?
-Frank
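For reference, a sketch of how the collection state could be pulled down and pushed back with zkcli.sh. The script path, ZK host, and collection name below are placeholders, not from the original thread; try it against a non-production cluster first.

```shell
# Fetch the current cluster state for a collection to a local file
# (paths/hosts are assumptions -- adjust for your install):
./server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 \
  -cmd getfile /collections/mycollection/state.json /tmp/state.json

# Edit /tmp/state.json locally, then push it back into ZooKeeper:
./server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 \
  -cmd putfile /collections/mycollection/state.json /tmp/state.json
```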
On 1/17/17, 5:49 PM, "Pushkar Raste" wrote:
Try bouncing the overseer for your cluster.
On Jan 17, 2017 12:01 PM, "Kelly, Frank" wrote:
> Solr Version: 5.3.1
>
> Configuration: 3 shards, 3 replicas each
>
> After running out of heap memory recently (cause unknown) we've been
> successfully restarting nodes to recover.
Hi Guys
Just a quick question on search, which is not related to this post:
I have a few cores which are based on a mainframe extract, 1 core per extracted
file, which resembles a "DB Table".
The cores are all somehow linked via 1-to-many fields, with a structure similar
to a normal ERD.
Is it
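If the eventual question is whether those linked cores can be queried together: Solr's join query parser can follow a 1-to-many link between two cores hosted on the same node. The core and field names below are hypothetical, just to show the shape:

```
# Return docs from the main core whose id appears in the customer_id
# field of matching docs in a hypothetical "orders" core:
q={!join from=customer_id to=id fromIndex=orders}status:OPEN
```

Note that cross-core joins only return fields from the "to" side, and the "from" core must live on the same node; for truly relational access patterns, flattening/denormalizing at index time is usually the recommended alternative.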
That's a good point Alex, about indexed vs stored. Since all my queries are
exact match, I can just have them stored=false to save space. I believe
that helps, since there are billions of rows and it will hopefully save
quite a bit of space.
But nothing can be done for squeezing dates in the same
On 16 January 2017 at 00:54, map reduced wrote:
> some way to squeeze timestamps in single
> document so that it doesn't increase the number of document by a lot and I
> am still able to range query on 'ts'.
Would DateRangeField be useful here?
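To illustrate the suggestion: a multi-valued DateRangeField can hold many timestamps in a single document while still supporting range queries, which is the "squeeze timestamps into one document" shape asked about. This is a hedged schema sketch; the field and type names are made up:

```xml
<!-- Hypothetical schema fragment -->
<fieldType name="dateRange" class="solr.DateRangeField"/>
<field name="ts" type="dateRange" indexed="true" stored="false"
       multiValued="true"/>
```

A range query would then look like `ts:[2016-11-01T00:00:00Z TO 2017-01-01T00:00:00Z]`, matching any document with at least one timestamp in that window.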
Does anyone have any idea?
On Sun, Jan 15, 2017 at 9:54 PM, map reduced wrote:
> I may have used the wrong terminology; by complex types I meant non-primitive
> types. Multivalued can be conceptualized as a list of values, for instance
> in your example myint = [32, 77], etc., which you
This sounds a lot like SOLR-4489. However, it looks like this was fixed prior
to your version (4.5), so it could be that you have found another case where
this bug still exists.
The other thing is that the default Query Converter cannot handle all cases,
and it could be that the query you are sending is beyond
Jimi,
Generally speaking, spellcheck does not work well against fields with stemming
or other "heavy" analysis. I would use a field that is tokenized
on whitespace with little else, and use that field for spellcheck.
By default, the spellchecker does not suggest for words in the index. So
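A sketch of the kind of lightly-analyzed field being recommended, assuming a source field (here called `content`, hypothetically) is copied into a dedicated spellcheck field:

```xml
<!-- Hypothetical schema fragment: whitespace tokenization plus
     lowercasing only, no stemming -->
<fieldType name="spell_text" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<field name="spell" type="spell_text" indexed="true" stored="false"/>
<copyField source="content" dest="spell"/>
```

The spellcheck component would then be pointed at `spell` rather than the heavily-analyzed search field.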
Solr Version: 5.3.1
Configuration: 3 shards, 3 replicas each
After running out of heap memory recently (cause unknown) we've been
successfully restarting nodes to recover.
Finally we did one restart and one of the nodes now says the following
2017-01-17 16:57:16.835 ERROR (qtp1395089624-17)
Did you solve the problem? I'm stuck with exactly the same problem now.
Please let me know if you already have a solution.
On Mon, Jan 16, 2017 at 2:58 PM, Zheng Lin Edwin Yeo
wrote:
> Hi,
>
> I have been using JSON Facet, but I am facing some constraints in
> displaying the field.
>
> For example, I have 2 fields, itemId and itemName. However, when I do the
> JSON Facet, I can only get it to
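One common workaround, sketched here on the assumption that each itemId maps to a single itemName, is to nest a terms subfacet so both values come back in the same bucket:

```json
{
  "items": {
    "type": "terms",
    "field": "itemId",
    "facet": {
      "name": { "type": "terms", "field": "itemName", "limit": 1 }
    }
  }
}
```

Another option, if the pairing is always 1:1, is to index a combined field (e.g. a hypothetical `itemId_name` copy of both values) and facet on that single field instead.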
While indexing a large number of records in SolrCloud 6.3.0 with a 5-node
configuration, I received an error. I'm using Java code (SolrJ) to
perform the indexing by creating a list of SolrInputDocuments, 1000 at a
time, and then calling CloudSolrClient.add(list). The records are small
-
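For context, a minimal SolrJ sketch of the batching pattern described. The ZK hosts and collection name are hypothetical, and this assumes SolrJ 6.x on the classpath:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexer {
    public static void main(String[] args) throws Exception {
        CloudSolrClient client = new CloudSolrClient.Builder()
                .withZkHost("zk1:2181,zk2:2181,zk3:2181") // hypothetical hosts
                .build();
        client.setDefaultCollection("mycollection");      // hypothetical name

        List<SolrInputDocument> batch = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", Integer.toString(i));
            batch.add(doc);
            if (batch.size() == 1000) { // send 1000 docs at a time
                client.add(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) client.add(batch);
        client.commit();
        client.close();
    }
}
```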
Shawn Heisey wrote:
> If the data for a field in the results comes from docValues instead of
> stored fields, I don't think it is compressed, which hopefully means
> that if a field is NOT requested, the corresponding docValues data is
> never read.
I think we need to make a consideration
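For what it's worth, a hedged sketch of the kind of field definition being discussed, where retrieval comes from docValues rather than the compressed stored fields (field name and type are made up):

```xml
<!-- Hypothetical field: stored="false" plus docValues="true" means
     values are returned from the docValues structures, so the
     compressed stored-field data for it is never read -->
<field name="price" type="plong" indexed="true" stored="false"
       docValues="true" useDocValuesAsStored="true"/>
```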