Hi Kaya,
Thanks for bringing it up. The topic is already being discussed by developers,
so expect to see some change in this area; not overnight, but incrementally.
Also, if you want to lend a helping hand, patches are more than welcome as
always.
Jan
> On 17 Jun 2020, at 04:22, Kayak28 wrote:
>
Hello, Community:
As GitHub and Python are replacing terminology related to slavery,
why don't we replace master/slave for Solr as well?
https://developers.srad.jp/story/18/09/14/0935201/
https://developer-tech.com/news/2020/jun/15/github-replace-slavery-terms-master-whitelist/
--
S
It Depends (tm).
As of Solr 7.5, optimize is different. See:
https://lucidworks.com/post/solr-and-optimizing-your-index-take-ii/
So, assuming you have _not_ specified maxSegments=1, any very large
segment (near 5G) that has _zero_ deleted documents won’t be merged.
So there are two scenarios:
For a full forced merge (mistakenly named “optimize”), the worst case disk space
is 3X the size of the index. It is common to need 2X the size of the index.
When I worked on Ultraseek Server 20+ years ago, it had the same merge behavior.
I implemented a disk space check that would refuse to merge
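A check along those lines takes only a few lines of code. A hypothetical sketch (not the Ultraseek implementation), using the 3x worst-case factor mentioned above:

```python
import os
import shutil

def safe_to_merge(index_dir, factor=3):
    """Refuse a forced merge unless free space >= factor * current index size.

    factor=3 covers the worst case for a full forced merge;
    2x is the commonly needed amount.
    """
    index_size = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, files in os.walk(index_dir)
        for name in files
    )
    free = shutil.disk_usage(index_dir).free
    return free >= factor * index_size
```

The point of the check is to fail fast before the merge starts, rather than letting it die mid-way and leave partial segment files behind.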
I can't give you a 100% certain answer, but I've experienced this, and what
"seemed" to happen to me was that the optimize would start, and that will
drive the size up threefold; if you run out of disk space in the process,
the optimize will quit, since it can't optimize, and leave the live index
pieces in
Can anybody shed some light on this? If not, I'm going to report it as a
bug in JIRA.
Thomas
On Sat, 13 Jun 2020 at 13:37, Thomas Corthals wrote:
> Hi
>
> I'm seeing different ordering on the spellcheck suggestions in cloud mode
> when using spellcheck.extendedResults=false vs.
> spellcheck.extend
When the optimize command is issued, the expectation after the completion of
the optimization process is that the index size either decreases or at most
remains the same. In a Solr 7.6 cluster with 50-plus shards, when the
optimize command is issued, some of the shards' transient or older segment
files are not de
So the entire cluster was down. I'm trying to bring it up node by node. I
restarted the first node. Solr comes up but the add-replica command fails. And
then I tried to check the CLUSTERSTATUS API; it showed the shard in active
state, no core as active (i.e. all down), and one live node, which was the one
that I restarted.
Do you have another host with replica alive or are all replicas on the host
that is down?
Are all SolrCloud hosts in the same ZooKeeper?
> On 16 Jun 2020, at 19:29, Vishal Vaibhav wrote:
>
> Hi, thanks. My Solr is running in Kubernetes, so the hostname goes away with
> the pod going search-rules-
Hi, thanks. My Solr is running in Kubernetes, so the hostname goes away with
the pod:
search-rules-solr-v1-2.search-rules-solr-v1.search-digital.svc.cluster.local
So in my case the pod with this host is gone, and so is the hostname
search-rules-solr-v1-2.search-rules-solr-v1.search-digital.svc.
Me personally, around 290 GB; as much as we could shove into them.
On Tue, Jun 16, 2020 at 12:44 PM Erick Erickson
wrote:
> How much physical RAM? A rule of thumb is that you should allocate no more
> than 25-50 percent of the total physical RAM to Solr. That's cumulative,
> i.e. the sum of the heap allocations across all your JVMs should be below
> that percentage.
Ok, I see the disconnect... Necessary parts of the index are read from disk
lazily. So your newSearcher or firstSearcher query needs to do whatever
operation causes the relevant parts of the index to be read. In this case,
probably just facet on all the fields you care about. I'd add sorting too
if
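Warming of that kind is configured in solrconfig.xml via the standard QuerySenderListener. A sketch, not the poster's actual config; the field names are placeholders:

```xml
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="facet">true</str>
      <!-- repeat facet.field for every field you facet on -->
      <str name="facet.field">category</str>
      <!-- sorting forces the sort data to be read from disk too -->
      <str name="sort">price asc</str>
    </lst>
  </arr>
</listener>
```

The same block under `event="newSearcher"` warms every new searcher after a commit, not just the first one at startup.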
David –
It’s fine to take this conversation back to the mailing list. Thank you very
much again for your suggestions.
I think you are correct. It doesn’t appear necessary to set termOffsets, and
it appears that the unified highlighter is using the TERM_VECTORS offset
source if I don’t t
How much physical RAM? A rule of thumb is that you should allocate no more
than 25-50 percent of the total physical RAM to Solr. That's cumulative,
i.e. the sum of the heap allocations across all your JVMs should be below
that percentage. See Uwe Schindler's MMapDirectory blog...
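As a back-of-the-envelope helper, the rule of thumb above can be expressed in a few lines. This is my own sketch; the 31 GB per-JVM cap is the usual compressed-oops ceiling, not something stated in this thread:

```python
def suggested_heap_gb(total_ram_gb, jvm_count=1, fraction=0.5):
    """Rule-of-thumb heap cap: 25-50% of physical RAM across all JVMs.

    The remainder is deliberately left to the OS page cache, which is
    what MMapDirectory relies on. Each heap is also capped at 31 GB so
    the JVM keeps compressed object pointers.
    """
    budget_per_jvm = (total_ram_gb * fraction) / jvm_count
    return min(budget_per_jvm, 31)
```

For example, `suggested_heap_gb(64, fraction=0.25)` gives 16.0 GB for a single JVM on a 64 GB host.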
Shot in the dark.
Hi @all,
I'm using Solr 6.6 and trying to validate my setup for AtomicUpdates and
Near-Realtime-Search.
Some questions are nagging me, so maybe someone can give me a hint
to make things clearer.
I am posting regular updates to a collection using the UpdateHandler and
Solr Command Syntax,
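For reference, an atomic update posted through the JSON update handler looks roughly like this. The field names and values are made up for illustration:

```python
import json

# Atomic update: only the named fields change. All other fields must be
# stored (or have docValues) so Solr can reconstruct the full document.
doc = {
    "id": "doc-1",
    "price_f": {"set": 9.99},    # replace the value
    "tags_ss": {"add": "sale"},  # append to a multi-valued field
    "views_i": {"inc": 1},       # increment a numeric field
}
payload = json.dumps([doc])
# POST payload to /solr/<collection>/update, e.g. with commitWithin,
# to get the near-real-time visibility being validated here.
print(payload)
```

With a soft-commit-based NRT setup, the updated values become searchable after the next (soft) commit, not immediately on receipt.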
Disclaimer: My background is Windows desktop and AD management and PowerShell.
I have no experience with Solr and only very limited experience with Java, so
please be patient with me.
I have inherited a Solr 7.2.1 setup (on Windows), and I'm trying to figure it
out so that it can be migrate
The way I read the stack trace you provided, it looks like DIH is
running the query "select test_field from test_keyspace.test_table
limit 10", but the Cassandra jdbc driver is reporting that Cassandra
doesn't support some aspect of that query. If I'm reading that right,
this seems like a question
To add to this, I generally have Solr start with:
-Xms31000m -Xmx31000m
and the only other things that run on them are MariaDB Galera cluster
nodes that are not in use (aside from replication).
The 31 GB is not an accident either; you don't want 32 GB.
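In a stock Solr install, the same heap settings would normally go through solr.in.sh (solr.in.cmd on Windows) rather than raw JVM flags; a sketch of the equivalent config:

```shell
# Either form works; SOLR_HEAP expands to matching -Xms/-Xmx values.
SOLR_HEAP="31g"
# or, explicitly:
# SOLR_JAVA_MEM="-Xms31g -Xmx31g"
```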
On Tue, Jun 16, 2020 at 11:26 AM Shawn H
Just wanted to close the loop here: Isabelle filed SOLR-14569 for this
and eventually reported there that the problem seems specific to her
custom configuration which specifies a seemingly innocuous
in solrconfig.xml.
See that jira for more detailed explanation (and hopefully a
resolution coming
On 6/11/2020 11:52 AM, Ryan W wrote:
I will check "dmesg" first, to find out any hardware error message.
[1521232.781801] Out of memory: Kill process 117529 (httpd) score 9 or
sacrifice child
[1521232.782908] Killed process 117529 (httpd), UID 48, total-vm:675824kB,
anon-rss:181844kB, file-r
I've been trying to build a query that I can use in newSearcher based off the
information in your previous e-mail. I thought you meant to build a *:* query
as per Query 1 in my previous e-mail but I'm still seeing the first-hit
execution.
Now I'm wondering if you meant to create a *:* query with
I don't see anything related in the solr.log file for the same error. Not
sure if there is any other place where I can check for this.
Thanks,
On Tue, Jun 16, 2020 at 10:21 AM Shawn Heisey wrote:
> On 6/12/2020 8:38 AM, yaswanth kumar wrote:
> > Using solr 8.2.0 and setup a cloud with 2 nodes. (
Did you try the autowarming like I mentioned in my previous e-mail?
> On Jun 16, 2020, at 10:18 AM, James Bodkin
> wrote:
>
> We've changed the schema to enable docValues for these fields and this led to
> an improvement in the response time. We found a further improvement by also
> switching
Hello,
The patch is to fix the display. It doesn't configure or limit the speed :)
On Tue, 16 Jun 2020 at 14:26, Shawn Heisey wrote:
> On 6/14/2020 12:06 AM, Florin Babes wrote:
> > While checking ways to optimize the speed of replication I've noticed
> that
> > the index download speed is
On 6/12/2020 8:38 AM, yaswanth kumar wrote:
Using Solr 8.2.0, and set up a cloud with 2 nodes (2 replicas for each
collection).
Enabled basic authentication and gave all access to the admin user.
Now trying to use the SolrCloud backup/restore API: backup is working great,
but when trying to invoke re
We've changed the schema to enable docValues for these fields and this led to
an improvement in the response time. We found a further improvement by also
switching off indexed as these fields are used for faceting and filtering only.
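For reference, a facet-only field along those lines can be declared in the schema like this; the field and type names are placeholders:

```xml
<field name="category_s" type="string"
       indexed="false" stored="false" docValues="true"/>
```

With docValues, faceting and sorting work without the inverted index; filtering still works on a docValues-only field, though it can be slower than on an indexed one.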
Since those changes, we've found that the first-execution for q
Sure, I pasted it below from the Solr logfiles:
2020-06-16 14:06:27.000 INFO (qtp1987693491-153) [c:test ]
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections
params={name=test&action=RESTORE&location=/opt/$
2020-06-16 14:06:27.001 ERROR (qtp1987693491-153) [c:test ]
o.a.s.s.Http
First, check your “df” parameter in all your handlers in solrconfig.xml.
Second, add "&debug=query” to the query and look at the parsed query in the
return; you’ll probably see some clause qualified by “text:…”.
Offhand, though, I don’t see where that’s happening in your query.
Wait, how are you submitting this
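A quick way to see the parsed query is to append the debug parameter to a select request; a sketch, with host, collection, and query as placeholders:

```python
from urllib.parse import urlencode

# debug=query adds a "parsedquery" section to the response, showing the
# field each clause was actually qualified with (an unexpected "text:"
# prefix usually points back at the df parameter).
params = {"q": "some query", "debug": "query"}
url = "http://localhost:8983/solr/mycoll/select?" + urlencode(params)
print(url)
```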
Hello,
I run a kerberized SolrCloud (7.4.0) environment (shipped with Cloudera
6.3.2) and have the problem that all index files for all cores are created
under the same ${solr.solr.home} directory instead of ${solr.core.name} and
thus they are corrupt.
Cloudera uses the HDFSDirectoryFactory by de
Hi Shawn,
I am new to Solr and I have set up a cloud cluster of 1 shard and 3
collections on 2 servers. I am facing the same issue. I am using
CloudSolrClient client = new
CloudSolrClient.Builder(zkUrls,Optional.empty()).build(), to create my
client.
and then I fire import command using,
clien
On 6/15/2020 9:04 PM, Vishal Vaibhav wrote:
I am running on Solr 8.5. For some reason the entire cluster went down. When
I try to bring up the nodes, they're not coming up. My health check is
on "/solr/rules/admin/system". I tried forcing a leader election but it
didn't help.
So when I run the followi
On 6/15/2020 2:52 PM, Deepu wrote:
sample query is
"{!complexphrase inOrder=true}(all_text_txt_enus:\"by\\ test*\") AND
(({!terms f=product_id_l}959945,959959,959960,959961,959962,959963)
AND (date_created_at_rdt:[2020-04-07T01:23:09Z TO *} AND
date_created_at_rdt:{* TO 2020-04-07T01:24:57Z]))"
On 6/15/2020 8:01 AM, Webster Homer wrote:
Only the minus following the parenthesis is treated as a NOT.
Are parentheses special? They're not mentioned in the eDismax documentation.
Yes, parentheses are special to edismax. They are used just like in
math equations, to group and separate things
On 6/14/2020 12:06 AM, Florin Babes wrote:
While checking ways to optimize the speed of replication I've noticed that
the index download speed is fixed at 5.1 in replication.html. Is there a
reason for that? If not, I would like to submit a patch with the fix.
We are using solr 8.3.1.
Looking a
Could you please tell me if I can increase the log verbosity here?
(If I try to do it through the Solr admin UI and set the root logger to ALL,
it doesn't help me.)
Best regards,
Irina Kamalova
On Mon, 15 Jun 2020 at 10:12, Irina Kamalova
wrote:
> I’m using Solr 7.7.3 and latest Cassandra jdbc driver 1.3.5
>
> I
Have you looked in the Solr logfiles?
> On 16 Jun 2020, at 05:46, yaswanth kumar wrote:
>
> Can anyone here help with the posted question, please?
>
>> On Fri, Jun 12, 2020 at 10:38 AM yaswanth kumar
>> wrote:
>>
>> Using Solr 8.2.0, and set up a cloud with 2 nodes (2 replicas for each
>> collecti