I see, thanks Susmit.
I hoped there was something simpler that could just be part of the collections
view we now have in the Solr 7 admin UI, or at least a one-stop API call.
I guess this will be added in a later release.
> On Jun 25, 2018, at 11:20 PM, Susmit wrote:
>
> Hi Aroop,
> i created
Hi Aroop,
I created a utility that uses the SolrZkClient API to read state.json,
enumerated one replica for each shard, used the /replication handler to get
each replica's index size, and added them up.
Sent from my iPhone
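The approach above can be sketched roughly as follows. This is a minimal outline, not the actual utility: the JSON shapes mirror Solr's cluster-state and replication-handler responses, but the helper names and exact fields are assumptions, and fetching is left abstract (in a real version you would read state.json via SolrZkClient and implement `size_lookup` with an HTTP call to `<core_url>/replication?command=details`).

```python
# Sketch of the approach: pick one replica per shard and sum the index
# sizes reported by the /replication handler. Field names below follow
# Solr's cluster state layout but should be treated as assumptions.

def one_replica_per_shard(cluster_state, collection):
    """Yield one (shard_name, core_url) pair per shard of the collection."""
    shards = cluster_state[collection]["shards"]
    for shard_name, shard in shards.items():
        # Any replica will do; replica index sizes should roughly match.
        replica = next(iter(shard["replicas"].values()))
        yield shard_name, replica["base_url"] + "/" + replica["core"]

def total_index_size_bytes(cluster_state, collection, size_lookup):
    """Sum index sizes over one replica per shard.

    size_lookup(core_url) should return that core's index size in bytes,
    e.g. by calling the /replication handler; it is passed in as a
    function here so the summing logic stays self-contained.
    """
    return sum(size_lookup(url)
               for _, url in one_replica_per_shard(cluster_state, collection))
```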
> On Jun 25, 2018, at 7:24 PM, Aroop Ganguly wrote:
>
> Hi Team
>
> I am not sure how to ascertain
Would you mind sharing details on
1. The SolrCloud setup: how many nodes do you have at your disposal, and how
many shards have you set up?
2. The indexing technology: what are you using? Core Java/.NET threads? Or a
system like Spark?
3. Where do you see the exceptions? The indexer process l
We are currently having problems in our production Solr setup.
What we currently have is something like this:
- Solr 6.6.3 (cloud mode)
- 10 threads for indexing
- 900k total documents
- 500 documents per batch
So in each thread, the process will call a stored procedure with a lot of
Hi Team
I am not sure how to ascertain the total size of a collection via the Solr UI
on a Solr7+ installation.
The collection is heavily sharded and replicated, so it's tedious to have to
look at each core and add the sizes up to figure out the size of the entire
collection.
Is there an
The site_address field has all the addresses of the United States. The idea is
to build something similar to Google Places autosuggest.
Here's an example query:
curl "http://localhost/solr/addressbook/suggest?suggest.q=1054%20club&wt=json"
Response:
{
  "responseHeader": {
    "status": 0,
    "QTime": 3125,
    "par
Basically, we have an environment that has a large number of solr nodes (~100)
and an environment with fewer solr nodes (~10). In the “big” environment, we
have lots of smaller cores (around 3Gb), and in the smaller environment, we
have fewer bigger cores (around 30 Gb). We transfer data betwe
We have a high-update-rate collection with a lot of replicas. Sometimes after a
config reload, some of the replicas go down (brown in the cloud graph). I got
really tired of fixing them by hand in a 40-node cluster.
I wrote a script to mine those out of clusterstatus and send a request recovery
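A rough sketch of what such a script might look like, assuming the CLUSTERSTATUS JSON has already been fetched (the JSON layout mirrors Solr's Collections API response, but treat the exact field access and the URL wiring as assumptions; an actual script would then issue an HTTP request per URL):

```python
# Sketch: mine "down" replicas out of a CLUSTERSTATUS-style payload and
# build core-admin REQUESTRECOVERY URLs for them. Fetching the status and
# sending the requests are deliberately left out so the logic is testable.

def down_replicas(cluster_status):
    """Yield (collection, shard, core, node_name) for every down replica."""
    for coll_name, coll in cluster_status["cluster"]["collections"].items():
        for shard_name, shard in coll["shards"].items():
            for replica in shard["replicas"].values():
                if replica.get("state") == "down":
                    yield (coll_name, shard_name,
                           replica["core"], replica["node_name"])

def recovery_urls(cluster_status):
    """Build a REQUESTRECOVERY URL per down replica.

    Solr node names look like "host:port_solr", so splitting on '_'
    recovers the host:port part.
    """
    return [
        f"http://{node.split('_')[0]}/solr/admin/cores"
        f"?action=REQUESTRECOVERY&core={core}"
        for _, _, core, node in down_replicas(cluster_status)
    ]
```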
FYI to all, just as an update, we rebuilt the index in question from
scratch for a second time this weekend and the problem went away on 1 node,
but we were still seeing it on the other node. After restarting the
problematic node, the problem went away. Still makes me a little uneasy as
we weren't
On 6/22/2018 12:14 PM, Matthew Faw wrote:
> So I’ve tried running MIGRATE on solr 7.3.1 using the following parameters:
> 1) “split.key=”
> 2) “split.key=!”
> 3) “split.key=DERP_”
> 4) “split.key=DERP/0!”
>
> For 1-3, I am seeing the same ERRORs you see. For 4, I do not see any ERRORs.
>
> Interes
Hi Shawn,
Thanks for the reply.
If "lucene" is the default query parser, then how can we specify the Standard
Query Parser (QP) explicitly in the query?
The dismax QP can be specified by defType=dismax and the extended dismax QP by
defType=edismax; what about declaring the standard QP?
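For reference, the standard parser can be selected the same way, with defType=lucene. A small sketch of the request parameters (the collection name and field are made up for illustration):

```python
# Sketch: building a Solr /select URL that explicitly selects the
# standard ("lucene") query parser via defType, alongside the dismax
# variants mentioned above. Collection/field names are hypothetical.
from urllib.parse import urlencode

def solr_select_url(base_url, params):
    """Assemble a Solr /select request URL from a params dict."""
    return f"{base_url}/select?{urlencode(params)}"

url = solr_select_url(
    "http://localhost:8983/solr/mycoll",
    {"q": "title:solr", "defType": "lucene", "wt": "json"},
)
```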
Regards
Kamal
On Wed, Jun 6
On 6/24/2018 7:38 PM, 苗海泉 wrote:
Hello, everyone. We have encountered two Solr problems and hope to get help.
Our data volume is very large, 24.5TB a day, and the number of records is
110 billion. We originally used 49 solr nodes. Because of insufficient
storage, we expanded to 100. For a solr cluste
OK. In case somebody needs it, I found a solution:
https://github.com/flaxsearch/luwak/issues/173
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
On 6/25/2018 1:41 AM, Srinivas Muppu (US) wrote:
Is there any possible solution/steps for moving the Solr installation setup
from the 'E' drive to the 'D' drive (new drive) without any impact on the
existing application (it should not require reindexing again)?
You started a previous thread on this topic
Brian,
If you are still facing the issue after disabling the buffer, kindly shut down
all the nodes at the source and then start them again; stale tlogs will start
purging themselves.
Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
Linke
Hi Rajeswari,
No it is not. Source forwards the update to the Target in classic manner.
Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
LinkedIn: https://www.linkedin.com/in/sarkaramrit2
Medium: https://medium.com/@sarkaramrit2
Hi Rainman,
See http://lucene.apache.org/solr/community.html#mailing-lists-irc for
subscription information.
--
Steve
www.lucidworks.com
> On Jun 25, 2018, at 12:07 AM, Rainman Sián wrote:
>
> Hello
>
> I'm Rainman, I have worked with Solr in a couple of projects in the past
> and about to s
First, understand that this list is maintained by volunteers, so
answers aren't guaranteed.
If you require dedicated support, there are various organizations that
provide it, but you'll have to contact them.
That said, the community is quite responsive; just post questions to
solr-user like this
Well, this is a user's list, not a paid support channel. People here
volunteer their time/expertise.
First of all, Solr 4.2 is very old. From what you're showing, you've simply
grown too big for the server and are running into memory issues. Your
choices are:
1> get a bigger machine and allocate m
Can you please update on this?
From: Jagdeeshwar S [mailto:jagdeeshw...@revalsys.com]
Sent: 22 June 2018 10:41
To: 'solr-user@lucene.apache.org'
Cc: 'Raj Samala'
Subject: Solr objects consuming more GC (Garbage collector) on our
application
Hi Support,
We are using Solr 4.2.0 version
Hello
I'm Rainman, I have worked with Solr in a couple of projects in the past
and about to start a new one.
I want to be part of this list and collaborate to the project,
Best regards,
--
Rainman Sián
Hi,
With such a big cluster a lot of things can go wrong, and it is hard to give
any answer without looking into it more and understanding your model. I assume
that you are monitoring your system (both Solr/ZK and the components that
index/query), so that should be the first thing to look at and see if
Hi Solr Team,
After subscribing to solr-user@lucene.apache.org, I am sending the issue
details below to the Solr mailing list again. Please help us at the earliest.
As part of Solr project installation setup and instances(including
clustered solr, zk services and indexing jobs scheduler services
Thanks Andrea and Erick