before the response is supplied to the client, as you can
with, say, MongoDB replicas.
On Friday, October 21, 2016 1:18 AM, Garth Grimm
<garthgr...@averyranchconsulting.com> wrote:
No matter where you send the update to initially, it will get sent to the
leader of the shard
of the solr instances, does it automatically load
balance between the replicas?
Or do I have to hit each instance in a round-robin way and have the load
balanced through the code?
Please advise the best way to do so.
Thank you very much again.
On Fri, Oct 21, 2016 at 9:18 AM, Garth Grimm
the document update.
In general, ZooKeeper really only provides the cloud configuration information
once (at most) during all the updates; the actual document updates only get
sent to Solr nodes. There's definitely no need to distribute load between
ZooKeeper nodes for this situation.
Regards,
Garth Grimm
-
Have you evaluated whether the "mm" parameter might help?
https://cwiki.apache.org/confluence/display/solr/The+DisMax+Query+Parser#TheDisMaxQueryParser-Themm(MinimumShouldMatch)Parameter
-Original Message-
From: preeti kumari [mailto:preeti.bg...@gmail.com]
Sent: Friday, September 23,
Both.
One shard will have roughly half the documents, and the indices built from
them; the other shard will have the other half of the documents, and the
indices built from those.
There won't be one location that contains all the documents, nor all the
indices.
-Original Message-
I thought that if you start with 3 Zk nodes in the ensemble, and only lose 1,
it will have no effect on indexing at all, since you still have a quorum.
If you lose 2 (which takes you below quorum), then the cloud loses "confidence"
in which solr core is the leader of each shard and stops
Yes.
-Original Message-
From: Yago Riveiro [mailto:yago.rive...@gmail.com]
Sent: Tuesday, December 22, 2015 5:51 AM
To: solr-user@lucene.apache.org
Subject: Indexing using a collection alias
Hi,
Is it possible to index documents using the alias and not the collection name, if
the alias
Is there really a good reason to consolidate down to a single segment?
Archiving (as one example). Come July 1, the collection for log
entries/transactions in June will never be changed, so optimizing is
actually a good thing to do.
Kind of getting away from OP's question on this, but I don't
Check the firewall settings on the Linux machine.
By default, mine block port 8983, so the request never even gets to Jetty/Solr.
-Original Message-
From: Paden [mailto:rumsey...@gmail.com]
Sent: Monday, June 22, 2015 2:48 PM
To: solr-user@lucene.apache.org
Subject: Connecting to a Solr
Framework way?
Maybe try delving into the log4j framework and modifying the log4j.properties
file. You can generate different log files based upon which class generated the
message. Here's an example that I experimented with previously; it generates
an update log, and 2 different query logs with
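The example itself didn't survive in this excerpt. A hypothetical log4j 1.x
sketch of the technique (separate appenders per logger, with additivity turned
off so messages don't also land in the root log) might look like the following;
the specific logger names used here are assumptions about which Solr classes
emit update and query messages, so verify them against your own log output:

```properties
# Root log: everything at INFO to the main file
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=logs/solr.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p %c; %m%n

# Update messages go to their own file and nowhere else
log4j.logger.org.apache.solr.update.processor.LogUpdateProcessor=INFO, updates
log4j.additivity.org.apache.solr.update.processor.LogUpdateProcessor=false
log4j.appender.updates=org.apache.log4j.RollingFileAppender
log4j.appender.updates.File=logs/solr_update.log
log4j.appender.updates.layout=org.apache.log4j.PatternLayout
log4j.appender.updates.layout.ConversionPattern=%d{ISO8601} %m%n

# Query/request logging split out the same way
log4j.logger.org.apache.solr.core.SolrCore.Request=INFO, queries
log4j.additivity.org.apache.solr.core.SolrCore.Request=false
log4j.appender.queries=org.apache.log4j.RollingFileAppender
log4j.appender.queries.File=logs/solr_query.log
log4j.appender.queries.layout=org.apache.log4j.PatternLayout
log4j.appender.queries.layout.ConversionPattern=%d{ISO8601} %m%n
```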
Yes, it does support POST. As to format, I believe that's handled by the
container. So if you're url-encoding the parameter values, you'll probably
need to set a Content-Type: application/x-www-form-urlencoded header on the
HTTP POST.
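A minimal sketch of building such a request (not from the original thread; the
host and collection name are invented, and the request is only constructed here,
not sent):

```python
# Build a url-encoded POST to a Solr select handler, setting the
# Content-Type header explicitly as discussed above.
from urllib.parse import urlencode
from urllib.request import Request

params = {"q": "title:solr", "wt": "json", "rows": 10}
body = urlencode(params).encode("ascii")  # q=title%3Asolr&wt=json&rows=10

req = Request(
    "http://localhost:8983/solr/collection1/select",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it;
# here we just inspect what was built.
print(req.get_method())      # POST
print(body.decode("ascii"))  # q=title%3Asolr&wt=json&rows=10
```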
-Original Message-
From: Steven White
Shawn's explanation fits better with why Websphere and Jetty might behave
differently. But something else that might be happening could be if the DHCP
negotiation causes the IP address to change from one network to another and
back.
-Original Message-
From: Steven White
For updates, the document will always get routed to the leader of the
appropriate shard, no matter what server first receives the request.
-Original Message-
From: Martin de Vries [mailto:mar...@downnotifier.com]
Sent: Thursday, March 05, 2015 4:14 PM
To: solr-user@lucene.apache.org
Well, if you're going to reindex on a newer version, just start out with the
number of shards you feel is appropriate, and reindex.
But yes, if you had 3 shards, wanted to split some of them, you'd really
have to split all of them (making 6), if you wanted the shards to be about
the same size.
You can't just add a new core to an existing collection. You can add the new
node to the cloud, but it won't be part of any collection. You're not going to
be able to just slide it in as a 4th shard to an established collection of 3
shards.
The root of that comes from routing (I'll assume
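The routing argument can be sketched in a few lines. SolrCloud's default
compositeId router hashes the uniqueKey into a 32-bit space carved into one
contiguous range per shard; Solr actually uses MurmurHash3, so the CRC32
below is only a stand-in to illustrate why changing the shard count remaps
documents:

```python
# Illustration only: hash-range routing with a stand-in hash function.
import zlib

def shard_for(doc_id: str, num_shards: int) -> int:
    """Map a document id to a shard by slicing the 32-bit hash space."""
    h = zlib.crc32(doc_id.encode("utf-8"))   # unsigned 32-bit hash
    range_size = (1 << 32) // num_shards     # equal contiguous ranges
    return min(h // range_size, num_shards - 1)

# For a fixed shard count, a given id always routes to the same shard...
assert shard_for("doc-42", 3) == shard_for("doc-42", 3)

# ...but changing the shard count changes the ranges, which is why you
# can't just bolt a 4th shard onto an established 3-shard collection.
moved = sum(shard_for(f"doc-{i}", 3) != shard_for(f"doc-{i}", 4)
            for i in range(1000))
print(f"{moved} of 1000 docs would land on a different shard")
```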
I see the same issue on 4.10.1.
I’ll open a JIRA if I don’t see one.
I guess the best immediate work around is to copy the unique field, and use
that field for renaming?
On Nov 15, 2014, at 3:18 AM, Suchi Amalapurapu su...@bloomreach.com wrote:
Solr version:4.6.1
On Sat, Nov 15, 2014 at
https://issues.apache.org/jira/browse/SOLR-6744 created.
And hopefully correctly, since that’s my first.
On Nov 15, 2014, at 9:12 AM, Garth Grimm
garthgr...@averyranchconsulting.com
wrote:
I see the same issue on 4.10.1.
I’ll open a JIRA if I don’t
that can cause issues
with Solr lookup.
I guess I should rephrase my question to: how do I auto-generate the unique
keys in the id field when using SolrCloud?
On Nov 12, 2014 7:28 PM, Garth Grimm garthgr...@averyranchconsulting.com
wrote:
You mention you already have a unique key identified.
So it sounds like you’re OK with using the docURL as the unique key for routing
in SolrCloud, but you don’t want to use it as a lookup mechanism.
If you don’t want to hash it and use that unique value in a second
unique field at feed time,
and you can’t seem to find any other field
<updateRequestProcessorChain name="uuid">
  <processor class="solr.UUIDUpdateProcessorFactory">
    <str name="fieldName">id</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
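What that chain does on the server side is roughly the following (a Python
analogue for illustration, not Solr code): if an incoming document has no id,
fill it with a random type-4 UUID before running the normal update.

```python
# Rough analogue of UUIDUpdateProcessorFactory's behavior.
import uuid

def ensure_id(doc: dict) -> dict:
    """Fill in 'id' only when the client omitted it."""
    if not doc.get("id"):
        doc["id"] = str(uuid.uuid4())  # random type-4 UUID
    return doc

doc = ensure_id({"title": "hello"})
print(doc["id"])  # a fresh 36-character UUID string
```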
On Tue, Nov 11, 2014 at 7:47 PM, Garth Grimm
“uuid” isn’t an out of the box field type that I’m familiar with.
Generally, I’d stick with the out of the box advice of the schema.xml file,
which includes things like….
<!-- Only remove the id field if you have a very good reason to. While not
strictly
required, it is highly
tried that process before.
On Nov 11, 2014, at 7:39 PM, Garth Grimm
garthgr...@averyranchconsulting.com
wrote:
“uuid” isn’t an out of the box field type that I’m familiar with.
Generally, I’d stick with the out of the box advice of the schema.xml file
is the city Karlsruhe with 296k inhabitants and
an importance value of 10.
Garth Grimm
garthgr...@averyranchconsulting.com
wrote on Thursday, 16 October 2014 at 16:40:
Spaces should work just fine. Can you show us exactly what is happening
What field(s) auto suggest uses is configurable. So you could create special
fields (and associated ‘copyField’ configs) to populate specific fields for
auto suggest.
For example, you could have 2 fields for “hidden_desc” and “visible_desc”.
Copy field both of them to a field named
Spaces should work just fine. Can you show us exactly what is happening with
the score that leads you to the conclusion that it isn’t working?
Some testing from an example collection I have…
No boost:
Well, the current release is only supported on Linux. A Windows compatible
release is planned for later this year.
-Original Message-
From: Anurag Sharma [mailto:anura...@gmail.com]
Sent: Sunday, October 05, 2014 12:23 PM
To: solr-user@lucene.apache.org
Subject: Re: [ANN] Lucidworks
As a follow-up question on this:
One would want to use some kind of load balancing 'above' the SolrCloud
installation for search queries, correct? To ensure that the initial requests
would get distributed evenly to all nodes?
If you don't have that, and send all requests to M2S2 (IRT OP),
Garth Grimm
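The kind of client-side round robin being discussed can be sketched in a few
lines (the node URLs are invented for illustration; in practice CloudSolrServer
or an external load balancer does this for you):

```python
# Round-robin distribution of initial requests across SolrCloud nodes.
from itertools import cycle

nodes = [
    "http://m1s1:8983/solr",
    "http://m1s2:8983/solr",
    "http://m2s1:8983/solr",
    "http://m2s2:8983/solr",
]
next_node = cycle(nodes)  # endless round-robin iterator

def pick_node() -> str:
    """Return the next node in rotation for the next initial request."""
    return next(next_node)

# Five picks: each node once, then wrapping back to the first.
print([pick_node() for _ in range(5)])
```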
Given a 4 node Solr Cloud (i.e. 2 shards, 2 replicas per shard).
Let's say one node becomes 'nonresponsive'. Meaning sockets get created, but
transactions to them don't get handled (i.e. they time out). We'll also assume
that means the solr instance can't send information out to zookeeper or
client for these internal
requests. CloudSolrServer handles failover for the user (or non-internal)
requests. Or you can use your own external load balancer.
- Mark
Cheers,
Tim
On Tue, Nov 19, 2013 at 11:58 AM, Garth Grimm
garthgr...@averyranchconsulting.com wrote:
Given a 4 solr node
But if you're working with multiple configs in zookeeper, be aware that 4.5
currently has an issue creating multiple collections in a cloud that has
multiple configs. It's targeted to be fixed whenever 4.5.1 comes out.
https://issues.apache.org/jira/browse/SOLR-5306
-Original
Go to the admin screen for Cloud/Tree, and then click the node for
aliases.json. To the lower right, you should see something like:
{"collection":{"AdWorksQuery":"AdWorks"}}
Or access the Zookeeper instance, and do a 'get /aliases.json'.
-Original Message-
From: Christopher Gross
://index1:8080/solr/admin/cores?action=CREATEALIAS&name=core1&collections=core1new&shard=shard1
Correct?
-- Chris
On Wed, Oct 16, 2013 at 9:02 AM, Garth Grimm
garthgr...@averyranchconsulting.com wrote:
The alias applies to the entire cloud, not a single core.
So you'd have your indexing