This is an automated email from the ASF dual-hosted git repository.
psalagnac pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/solr.git
The following commit(s) were added to refs/heads/main by this push:
new 57d879c3d70 SOLR-17929: Remove deprecated overseer work queue (#3702)
57d879c3d70 is described below
commit 57d879c3d70ff680cc52fe97e5a9184f0df051b3
Author: Pierre Salagnac <[email protected]>
AuthorDate: Wed Oct 8 22:08:33 2025 +0200
SOLR-17929: Remove deprecated overseer work queue (#3702)
This removes the Overseer internal queue at /overseer/queue-work. This queue
has been read-only since Solr 8. With this change, we no longer try to process
leftover tasks from this queue.
---
dev-docs/overseer/overseer.adoc | 7 +-
solr/CHANGES.txt | 2 +
.../src/java/org/apache/solr/cloud/Overseer.java | 93 ++++++------------
.../cloud/api/collections/OverseerStatusCmd.java | 19 +---
.../OverseerCollectionConfigSetProcessorTest.java | 1 -
.../test/org/apache/solr/cloud/OverseerTest.java | 109 +--------------------
.../pages/cluster-node-management.adoc | 1 -
7 files changed, 37 insertions(+), 195 deletions(-)
diff --git a/dev-docs/overseer/overseer.adoc b/dev-docs/overseer/overseer.adoc
index b08aed5faa6..a782e5a0bc8 100644
--- a/dev-docs/overseer/overseer.adoc
+++ b/dev-docs/overseer/overseer.adoc
@@ -146,9 +146,6 @@ Methods for other nodes to get the queues for interacting with Overseer:
* Collection queue used to send collection API tasks: `OverseerTaskQueue get**Collection**Queue()`. The queue is in Zookeeper at `/overseer/collection-queue-work`. Actions are processed by `<<OverseerCollectionMessageHandler>>`.
* Config set queue used to send configset API tasks: `OverseerTaskQueue get**ConfigSet**Queue()`. Returns the Collection queue (in Zookeeper at `/overseer/collection-queue-work`, same queue as the collection queue), actions expected have a `"configsets:"` prefix and are processed by `<<OverseerConfigSetMessageHandler>>`.
-The deprecated work queue `ZkDistributedQueue get**InternalWork**Queue()` at `"/overseer/queue-work"` is used by Overseer to store operations removed from the state update queue and currently being executed (and was used to manage Overseer failures). It is no longer used and only supported for migrations from older versions of SolrCloud (7) to version 8. Should be removed by version 9. +
-Do note though that other queues are sometimes called “work queues” because work is enqueued into them.
-
Zookeeper backed maps used for tracking async tasks (running, successfully completed and failed): `get**RunningMap**()`, `get**CompletedMap**()`, `get**FailureMap**()`. These maps can also be obtained from `ZkController getOverseerRunningMap()`, `getOverseerCompletedMap()` and `getOverseerFailureMap()`. +
These maps are updated in `<<OverseerTaskProcessor>>` as tasks are executed and are used to wait for async requests to complete (for example `RebalanceLeader.waitAsyncRequests()`) or to get the status of a task (for example `CollectionHandler.CollectionOperation.REQUESTSTATUS_OP` enum implementation). +
Note “async id” and “request id” are used interchangeably in the code and refer to the same thing.
@@ -190,9 +187,9 @@ But… there is a possibility that an Overseer in `ClusterStateUpdater.run()` wo
A detailed description of the class:
-It first (one time only) builds the cluster state (view of the current state of all collections, see `org.apache.solr.common.cloud.ClusterState`) by reading everything from Zookeeper (using `ZkStateReader.forciblyRefreshAllClusterStateSlow()`), then applying state messages from the deprecated internal work queue `"/overseer/queue-work"` (where items unprocessed by a previous Overseer might have been left in Zookeeper from previous versions of Solr, nothing is enqueued there anymore, see [...]
+It first (one time only) builds the cluster state (view of the current state of all collections, see `org.apache.solr.common.cloud.ClusterState`) by reading everything from Zookeeper (using `ZkStateReader.forciblyRefreshAllClusterStateSlow()`).
-Once this initial state is built (then updated in Zookeeper if needed but that no longer happens, see deprecation comment in <<Queues for interacting with other nodes>>), the `run()` method switches to consume the `get**StateUpdateQueue**()` at `"/overseer/queue"` and applies changes to Zookeeper as long as the Overseer is on the current node. Note that items from the queue are read by batches of max 1000, the changes are batched (max batching size 10000 see `Overseer.STATE_UPDATE_BATCH_ [...]
+Once this initial state is built, the `run()` method consumes the `get**StateUpdateQueue**()` at `"/overseer/queue"` and applies changes to Zookeeper as long as the Overseer is on the current node. Note that items from the queue are read by batches of max 1000, the changes are batched (max batching size 10000 see `Overseer.STATE_UPDATE_BATCH_SIZE` and max batching delay 2 seconds see `ZkStateReader.STATE_UPDATE_DELAY`). More than 1000 items can be batched if more items are in the queue o [...]
Updates to Zookeeper are done through `ClusterStateUpdater.processQueueItem()` and `ClusterStateUpdater.processMessage()`. Applying these state changes uses the `*Mutator` classes (see `<<ClusterStateMutator>>` and friends). The change only impacts Zookeeper cluster state, not the actual nodes. The Mutator classes end up returning to `ClusterStateUpdater.processQueueItem()` the write commands to apply to Zookeeper and those are applied. Processed messages (a.k.a. tasks) are removed from [...]
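
(Aside, not part of this patch: a minimal sketch of how a state update message is offered to `/overseer/queue`, mirroring the message-building and offer() calls in the test code later in this commit. It assumes a class placed in the org.apache.solr.cloud package, since getStateUpdateQueue() is package-visible, and the collection name is a placeholder.)

    package org.apache.solr.cloud; // assumption: same package as Overseer

    import org.apache.solr.common.cloud.SolrZkClient;
    import org.apache.solr.common.cloud.ZkNodeProps;
    import org.apache.solr.common.cloud.ZkStateReader;
    import org.apache.solr.common.params.CollectionParams;

    class StateUpdateQueueSketch {
      // Offers a CREATE message to the Overseer state update queue at /overseer/queue,
      // built the same way as in the (removed) testReplay code further down this diff.
      static void enqueueCreate(SolrZkClient zkClient, String collection) throws Exception {
        ZkDistributedQueue queue = Overseer.getStateUpdateQueue(zkClient, new Stats());
        ZkNodeProps message =
            new ZkNodeProps(
                Overseer.QUEUE_OPERATION,
                CollectionParams.CollectionAction.CREATE.toLower(),
                "name",
                collection,
                ZkStateReader.REPLICATION_FACTOR,
                "1",
                ZkStateReader.NUM_SHARDS_PROP,
                "1",
                "createNodeSet",
                "");
        // The elected Overseer leader consumes this message in ClusterStateUpdater.run().
        queue.offer(message);
      }
    }
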
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index 5b91accaedc..fe57504ec98 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -264,6 +264,8 @@ Other Changes
overseerEnabled cluster property and an env var SOLR_CLOUD_OVERSEER_ENABLED.
Read more in the
upgrade guide. (David Smiley)
+* SOLR-17929: Remove obsolete overseer internal work queue. This queue was read-only since Solr 8. (Pierre Salagnac)
+
================== 9.10.0 ==================
New Features
---------------------
diff --git a/solr/core/src/java/org/apache/solr/cloud/Overseer.java b/solr/core/src/java/org/apache/solr/cloud/Overseer.java
index c6ca1426cdb..5cd17b72a43 100644
--- a/solr/core/src/java/org/apache/solr/cloud/Overseer.java
+++ b/solr/core/src/java/org/apache/solr/cloud/Overseer.java
@@ -174,18 +174,6 @@ public class Overseer implements SolrCloseable {
private final String myId;
// queue where everybody can throw tasks
private final ZkDistributedQueue stateUpdateQueue;
- // TODO remove in 9.0, we do not push message into this queue anymore
- // Internal queue where overseer stores events that have not yet been published into cloudstate
- // If Overseer dies while extracting the main queue a new overseer will start from this queue
- private final ZkDistributedQueue workQueue;
- // Internal map which holds the information about running tasks.
- private final DistributedMap runningMap;
- // Internal map which holds the information about successfully completed tasks.
- private final DistributedMap completedMap;
- // Internal map which holds the information about failed tasks.
- private final DistributedMap failureMap;
-
- private final Stats zkStats;
private SolrMetricsContext clusterStateUpdaterMetricContext;
@@ -202,12 +190,7 @@ public class Overseer implements SolrCloseable {
int minStateByteLenForCompression,
Compressor compressor) {
this.zkClient = reader.getZkClient();
- this.zkStats = zkStats;
- this.stateUpdateQueue = getStateUpdateQueue(zkStats);
- this.workQueue = getInternalWorkQueue(zkClient, zkStats);
- this.failureMap = getFailureMap(zkClient);
- this.runningMap = getRunningMap(zkClient);
- this.completedMap = getCompletedMap(zkClient);
+ this.stateUpdateQueue = getStateUpdateQueue(zkClient, zkStats);
this.myId = myId;
this.reader = reader;
this.minStateByteLenForCompression = minStateByteLenForCompression;
@@ -225,10 +208,6 @@ public class Overseer implements SolrCloseable {
return stateUpdateQueue.getZkStats();
}
- public Stats getWorkQueueStats() {
- return workQueue.getZkStats();
- }
-
@Override
public void run() {
MDCLoggingContext.setNode(zkController.getNodeName());
@@ -245,11 +224,17 @@ public class Overseer implements SolrCloseable {
try {
ZkStateWriter zkStateWriter = null;
ClusterState clusterState = null;
- boolean refreshClusterState = true; // let's refresh in the first iteration
- // we write updates in batch, but if an exception is thrown when writing new clusterstate,
+
+ // let's refresh in the first iteration
+ boolean refreshClusterState = true;
+
+ // We write updates in batch, but if an exception is thrown when writing new ClusterState,
// we do not sure which message is bad message, therefore we will re-process node one by one
- int fallbackQueueSize = Integer.MAX_VALUE;
- ZkDistributedQueue fallbackQueue = workQueue;
+ // until we processed all messages from the failing batch.
+ // We don't want to process messages one by one when starting a fresh overseer, so setting
+ // this initially to 0.
+ int fallbackQueueSize = 0;
+
while (!this.isClosed) {
isLeader = amILeader();
if (LeaderStatus.NO == isLeader) {
@@ -268,19 +253,20 @@ public class Overseer implements SolrCloseable {
new ZkStateWriter(reader, stats, minStateByteLenForCompression, compressor);
refreshClusterState = false;
- // if there were any errors while processing
- // the state queue, items would have been left in the
- // work queue so let's process those first
- byte[] data = fallbackQueue.peek();
+ // if there were any errors while processing the queue, items would have been left in
+ // the queue with a fallback size greater than 0, so let's process those first
+ byte[] data = stateUpdateQueue.peek();
while (fallbackQueueSize > 0 && data != null) {
final ZkNodeProps message = ZkNodeProps.load(data);
if (log.isDebugEnabled()) {
log.debug(
"processMessage: fallbackQueueSize: {}, message = {}",
- fallbackQueue.getZkStats().getQueueLength(),
+ stateUpdateQueue.getZkStats().getQueueLength(),
message);
}
try {
+ // force flush to ZK (enableBatching == false) after each message because there is
+ // no fallback if items are removed from the queue but fail to be written to ZK
clusterState =
processQueueItem(message, clusterState, zkStateWriter, false, null);
} catch (Exception e) {
@@ -288,30 +274,29 @@ public class Overseer implements SolrCloseable {
log.warn(
"Exception when process message = {}, consider as bad
message and poll out from the queue",
message);
- fallbackQueue.poll();
+ stateUpdateQueue.poll();
}
throw e;
}
- fallbackQueue.poll(); // poll-ing removes the element we got by peek-ing
- data = fallbackQueue.peek();
+ stateUpdateQueue.poll(); // poll-ing removes the element we got by peek-ing
+ data = stateUpdateQueue.peek();
fallbackQueueSize--;
}
// force flush at the end of the loop, if there are no pending updates, this is a no
// op call
clusterState = zkStateWriter.writePendingUpdates();
- // the workQueue is empty now, use stateUpdateQueue as fallback queue
- fallbackQueue = stateUpdateQueue;
fallbackQueueSize = 0;
} catch (IllegalStateException e) {
return;
} catch (KeeperException.SessionExpiredException e) {
- log.warn("Solr cannot talk to ZK, exiting Overseer work queue
loop", e);
+ log.warn("Solr cannot talk to ZK, exiting Overseer fallback
queue loop", e);
return;
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
return;
} catch (Exception e) {
- log.error("Exception in Overseer when process message from work
queue, retrying", e);
+ log.error(
+ "Exception in Overseer when process message from fallback
queue, retrying", e);
refreshClusterState = true;
continue;
}
@@ -348,9 +333,7 @@ public class Overseer implements SolrCloseable {
processedNodes.add(head.first());
fallbackQueueSize = processedNodes.size();
- // force flush to ZK after each message because there is no fallback if workQueue
- // items
- // are removed from workQueue but fail to be written to ZK
+ // Process intra-process messages (in memory messages)
while (unprocessedMessages.size() > 0) {
clusterState = zkStateWriter.writePendingUpdates();
Message m = unprocessedMessages.remove(0);
@@ -878,7 +861,7 @@ public class Overseer implements SolrCloseable {
throw new IllegalStateException(
"Cluster state is done in a distributed way, should not try to
access ZK queue");
}
- return getStateUpdateQueue(new Stats());
+ return getStateUpdateQueue(reader.getZKClient(), new Stats());
}
/**
@@ -887,7 +870,7 @@ public class Overseer implements SolrCloseable {
* other one is not.
*/
ZkDistributedQueue getOverseerQuitNotificationQueue() {
- return getStateUpdateQueue(new Stats());
+ return getStateUpdateQueue(reader.getZKClient(), new Stats());
}
/**
@@ -899,28 +882,8 @@ public class Overseer implements SolrCloseable {
* performed by this queue
* @return a {@link ZkDistributedQueue} object
*/
- ZkDistributedQueue getStateUpdateQueue(Stats zkStats) {
- return new ZkDistributedQueue(
- reader.getZkClient(), "/overseer/queue", zkStats, STATE_UPDATE_MAX_QUEUE);
- }
-
- /**
- * Internal overseer work queue. This should not be used outside of Overseer.
- *
- * <p>This queue is used to store overseer operations that have been removed
from the state update
- * queue but are being executed as part of a batch. Once the result of the
batch is persisted to
- * zookeeper, these items are removed from the work queue. If the overseer
dies while processing a
- * batch then a new overseer always operates from the work queue first and
only then starts
- * processing operations from the state update queue. This method will
create the /overseer znode
- * in ZooKeeper if it does not exist already.
- *
- * @param zkClient the {@link SolrZkClient} to be used for reading/writing to the queue
- * @param zkStats a {@link Stats} object which tracks statistics for all zookeeper operations
- * performed by this queue
- * @return a {@link ZkDistributedQueue} object
- */
- static ZkDistributedQueue getInternalWorkQueue(final SolrZkClient zkClient, Stats zkStats) {
- return new ZkDistributedQueue(zkClient, "/overseer/queue-work", zkStats);
+ static ZkDistributedQueue getStateUpdateQueue(SolrZkClient zkClient, Stats zkStats) {
+ return new ZkDistributedQueue(zkClient, "/overseer/queue", zkStats, STATE_UPDATE_MAX_QUEUE);
}
/* Internal map for failed tasks, not to be used outside of the Overseer */
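
(Aside, not part of this patch: a condensed sketch of the fallback behaviour that replaces the old work-queue replay in run() above. After a failed batch write, the head of /overseer/queue itself is replayed one message at a time; the class and method below are hypothetical, only peek(), poll() and ZkNodeProps.load() mirror the diff.)

    package org.apache.solr.cloud; // assumption: same package, for access to ZkDistributedQueue

    import org.apache.solr.common.cloud.ZkNodeProps;

    class FallbackReplaySketch {
      // Replays up to 'remaining' head items of the state update queue one by one; in the
      // real loop each message goes through processQueueItem(..., enableBatching == false, ...)
      // so it is flushed to ZooKeeper before being polled off the queue.
      static void replayHead(ZkDistributedQueue stateUpdateQueue, int remaining) throws Exception {
        byte[] data = stateUpdateQueue.peek();
        while (remaining > 0 && data != null) {
          ZkNodeProps message = ZkNodeProps.load(data);
          // ... apply 'message' to the cluster state and flush to ZooKeeper here ...
          stateUpdateQueue.poll(); // poll-ing removes the element we got by peek-ing
          data = stateUpdateQueue.peek();
          remaining--;
        }
      }
    }
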
diff --git a/solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerStatusCmd.java b/solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerStatusCmd.java
index 0edfb81ad98..741a838b7c4 100644
--- a/solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerStatusCmd.java
+++ b/solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerStatusCmd.java
@@ -61,8 +61,6 @@ import org.slf4j.LoggerFactory;
* <li><b>{@code leader}:</b> {@code ID} of the current overseer leader node
* <li><b>{@code overseer_queue_size}:</b> count of entries in the {@code
/overseer/queue}
* Zookeeper queue/directory
- * <li><b>{@code overseer_work_queue_size}:</b> count of entries in the
{@code
- * /overseer/queue-work} Zookeeper queue/directory
* <li><b>{@code overseer_collection_queue_size}:</b> count of entries in
the {@code
* /overseer/collection-queue-work} Zookeeper queue/directory
* <li><b>{@code overseer_operations}:</b> map (of maps) of success and
error counts for
@@ -128,17 +126,15 @@ import org.slf4j.LoggerFactory;
* <li>{@code remove_event}
* <li>{@code take}
* </ul>
- * <li><b>{@code overseer_internal_queue}:</b> same as above but for queue
{@code
- * /overseer/queue-work}
* <li><b>{@code collection_queue}:</b> same as above but for queue {@code
* /overseer/collection-queue-work}
* </ul>
*
* <p>Maps returned as values of keys in <b>{@code overseer_operations}</b>,
<b>{@code
- * collection_operations}</b>, <b>{@code overseer_queue}</b>, <b>{@code
overseer_internal_queue}</b>
- * and <b>{@code collection_queue}</b> include additional stats. These stats
are provided by {@link
- * MetricUtils}, and represent metrics on each type of operation execution (be
it failed or
- * successful), see calls to {@link Stats#time(String)}. The metric keys are:
+ * collection_operations}</b>, <b>{@code overseer_queue}</b> and <b>{@code
collection_queue}</b>
+ * include additional stats. These stats are provided by {@link MetricUtils},
and represent metrics
+ * on each type of operation execution (be it failed or successful), see calls
to {@link
+ * Stats#time(String)}. The metric keys are:
*
* <ul>
* <li>{@code avgRequestsPerSecond}
@@ -178,16 +174,12 @@ public class OverseerStatusCmd implements CollApiCmds.CollectionApiCommand {
zkStateReader.getZkClient().getData("/overseer/queue", null, stat, true);
results.add("overseer_queue_size", stat.getNumChildren());
stat = new Stat();
- zkStateReader.getZkClient().getData("/overseer/queue-work", null, stat,
true);
- results.add("overseer_work_queue_size", stat.getNumChildren());
- stat = new Stat();
zkStateReader.getZkClient().getData("/overseer/collection-queue-work",
null, stat, true);
results.add("overseer_collection_queue_size", stat.getNumChildren());
NamedList<Object> overseerStats = new NamedList<>();
NamedList<Object> collectionStats = new NamedList<>();
NamedList<Object> stateUpdateQueueStats = new NamedList<>();
- NamedList<Object> workQueueStats = new NamedList<>();
NamedList<Object> collectionQueueStats = new NamedList<>();
Stats stats = ccc.getOverseerStats();
for (Map.Entry<String, Stats.Stat> entry : stats.getStats().entrySet()) {
@@ -212,8 +204,6 @@ public class OverseerStatusCmd implements CollApiCmds.CollectionApiCommand {
}
} else if (key.startsWith("/overseer/queue_")) {
stateUpdateQueueStats.add(key.substring(16), lst);
- } else if (key.startsWith("/overseer/queue-work_")) {
- workQueueStats.add(key.substring(21), lst);
} else if (key.startsWith("/overseer/collection-queue-work_")) {
collectionQueueStats.add(key.substring(32), lst);
} else {
@@ -231,7 +221,6 @@ public class OverseerStatusCmd implements CollApiCmds.CollectionApiCommand {
results.add("overseer_operations", overseerStats);
results.add("collection_operations", collectionStats);
results.add("overseer_queue", stateUpdateQueueStats);
- results.add("overseer_internal_queue", workQueueStats);
results.add("collection_queue", collectionQueueStats);
}
}
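
(Aside, not part of this patch: with the work queue gone, OVERSEERSTATUS no longer reports overseer_work_queue_size or overseer_internal_queue. A throwaway check, assuming a cluster running locally at the ref-guide URL shown near the end of this diff:)

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class OverseerStatusCheck {
      public static void main(String[] args) throws Exception {
        // Call the collections API OVERSEERSTATUS action and look at the reported keys.
        HttpRequest request =
            HttpRequest.newBuilder(
                    URI.create(
                        "http://localhost:8983/solr/admin/collections?action=OVERSEERSTATUS&wt=json"))
                .GET()
                .build();
        String body =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println("overseer_queue_size reported: " + body.contains("overseer_queue_size"));
        // Expected to print false on versions that include this change:
        System.out.println(
            "overseer_work_queue_size reported: " + body.contains("overseer_work_queue_size"));
      }
    }
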
diff --git a/solr/core/src/test/org/apache/solr/cloud/OverseerCollectionConfigSetProcessorTest.java b/solr/core/src/test/org/apache/solr/cloud/OverseerCollectionConfigSetProcessorTest.java
index abfa0859929..a6b95ae7e7a 100644
--- a/solr/core/src/test/org/apache/solr/cloud/OverseerCollectionConfigSetProcessorTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/OverseerCollectionConfigSetProcessorTest.java
@@ -586,7 +586,6 @@ public class OverseerCollectionConfigSetProcessorTest extends SolrTestCaseJ4 {
when(overseerMock.getSolrCloudManager()).thenReturn(cloudManagerMock);
- when(overseerMock.getStateUpdateQueue(any())).thenReturn(stateUpdateQueueMock);
when(overseerMock.getStateUpdateQueue()).thenReturn(stateUpdateQueueMock);
// Selecting the cluster state update strategy: Overseer when distributedClusterStateUpdates is
diff --git a/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java b/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
index 770742c4547..d7680eaff7d 100644
--- a/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
@@ -49,7 +49,6 @@ import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.solr.SolrTestCaseJ4;
import org.apache.solr.client.solrj.SolrClient;
-import org.apache.solr.client.solrj.cloud.DistributedQueue;
import org.apache.solr.client.solrj.cloud.SolrCloudManager;
import org.apache.solr.client.solrj.impl.CloudHttp2SolrClient;
import org.apache.solr.client.solrj.impl.Http2SolrClient;
@@ -1134,7 +1133,7 @@ public class OverseerTest extends SolrTestCaseJ4 {
"1",
"createNodeSet",
"");
- ZkDistributedQueue workQueue = Overseer.getInternalWorkQueue(zkClient, new Stats());
+ ZkDistributedQueue workQueue = Overseer.getStateUpdateQueue(zkClient, new Stats());
workQueue.offer(badMessage);
overseerClient = electNewOverseer(server.getZkAddress());
@@ -1550,112 +1549,6 @@ public class OverseerTest extends SolrTestCaseJ4 {
}
}
- @Test
- public void testReplay() throws Exception {
-
- SolrZkClient overseerClient = null;
- ZkStateReader reader = null;
-
- try {
-
- ZkController.createClusterZkNodes(zkClient);
-
- reader = new ZkStateReader(zkClient);
- reader.createClusterStateWatchersAndUpdate();
- // prepopulate work queue with some items to emulate previous overseer died before persisting
- // state
- DistributedQueue queue = Overseer.getInternalWorkQueue(zkClient, new Stats());
-
- zkClient.makePath(ZkStateReader.COLLECTIONS_ZKNODE + "/" + COLLECTION, true);
-
- ZkNodeProps m =
- new ZkNodeProps(
- Overseer.QUEUE_OPERATION,
- CollectionParams.CollectionAction.CREATE.toLower(),
- "name",
- COLLECTION,
- ZkStateReader.REPLICATION_FACTOR,
- "1",
- ZkStateReader.NUM_SHARDS_PROP,
- "1",
- "createNodeSet",
- "");
- queue.offer(m);
- m =
- new ZkNodeProps(
- Overseer.QUEUE_OPERATION,
- OverseerAction.STATE.toLower(),
- ZkStateReader.NODE_NAME_PROP,
- "127.0.0.1:8983_solr",
- ZkStateReader.SHARD_ID_PROP,
- "shard1",
- ZkStateReader.COLLECTION_PROP,
- COLLECTION,
- ZkStateReader.CORE_NAME_PROP,
- "core1",
- ZkStateReader.STATE_PROP,
- Replica.State.RECOVERING.toString());
- queue.offer(m);
- m =
- new ZkNodeProps(
- Overseer.QUEUE_OPERATION,
- OverseerAction.STATE.toLower(),
- ZkStateReader.NODE_NAME_PROP,
- "node1:8983_",
- ZkStateReader.SHARD_ID_PROP,
- "shard1",
- ZkStateReader.COLLECTION_PROP,
- COLLECTION,
- ZkStateReader.CORE_NAME_PROP,
- "core2",
- ZkStateReader.STATE_PROP,
- Replica.State.RECOVERING.toString());
- queue.offer(m);
-
- overseerClient = electNewOverseer(server.getZkAddress());
-
- // submit to proper queue
- queue = getOverseerZero().getStateUpdateQueue();
- m =
- new ZkNodeProps(
- Overseer.QUEUE_OPERATION,
- OverseerAction.STATE.toLower(),
- ZkStateReader.NODE_NAME_PROP,
- "127.0.0.1:8983_solr",
- ZkStateReader.SHARD_ID_PROP,
- "shard1",
- ZkStateReader.COLLECTION_PROP,
- COLLECTION,
- ZkStateReader.CORE_NAME_PROP,
- "core3",
- ZkStateReader.STATE_PROP,
- Replica.State.RECOVERING.toString());
- queue.offer(m);
-
- reader.waitForState(
- COLLECTION,
- 1000,
- TimeUnit.MILLISECONDS,
- (liveNodes, collectionState) ->
- collectionState != null
- && collectionState.getSlice("shard1") != null
- && collectionState.getSlice("shard1").getReplicas().size()
== 3);
-
- assertNotNull(reader.getClusterState().getCollection(COLLECTION).getSlice("shard1"));
- assertEquals(
- 3,
- reader
- .getClusterState()
- .getCollection(COLLECTION)
- .getSlice("shard1")
- .getReplicasMap()
- .size());
- } finally {
- close(overseerClient);
- close(reader);
- }
- }
-
@Test
public void testExternalClusterStateChangeBehavior() throws Exception {
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc
index 4fd8678cb30..c5f9b58c9ac 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc
@@ -1126,7 +1126,6 @@ http://localhost:8983/solr/admin/collections?action=OVERSEERSTATUS
"QTime":33},
"leader":"127.0.1.1:8983_solr",
"overseer_queue_size":0,
- "overseer_work_queue_size":0,
"overseer_collection_queue_size":2,
"overseer_operations":[
"createcollection",{