This is an automated email from the ASF dual-hosted git repository.
dspavlov pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/ignite.git
The following commit(s) were added to refs/heads/master by this push:
new 87404be1c57 IGNITE-28640 Fix broken and outdated links in Apache Ignite 2.x documentation (#13087)
87404be1c57 is described below
commit 87404be1c57875ada0c4fe4af42510632908dcb4
Author: ignitetcbot <[email protected]>
AuthorDate: Tue May 5 09:26:11 2026 +0300
IGNITE-28640 Fix broken and outdated links in Apache Ignite 2.x documentation (#13087)
Co-authored-by: Dmitriy Pavlov <[email protected]>
---
.gitignore | 2 ++
docs/_data/toc.yaml | 10 +++++-----
docs/_docs/SQL/sql-tuning.adoc | 2 +-
docs/_docs/binary-client-protocol/data-format.adoc | 4 ++--
docs/_docs/clustering/baseline-topology.adoc | 2 +-
docs/_docs/data-modeling/data-partitioning.adoc | 2 +-
docs/_docs/data-rebalancing.adoc | 2 +-
docs/_docs/distributed-computing/distributed-computing.adoc | 2 +-
docs/_docs/events/events.adoc | 2 +-
.../ignite-for-spark/ignitecontext-and-rdd.adoc | 2 +-
.../ignite-for-spark/installation.adoc | 2 +-
.../ignite-for-spark/troubleshooting.adoc | 2 +-
docs/_docs/installation/kubernetes/generic-configuration.adoc | 4 ++--
docs/_docs/key-value-api/transactions.adoc | 4 ++--
docs/_docs/machine-learning/machine-learning.adoc | 2 +-
docs/_docs/monitoring-metrics/cluster-id.adoc | 2 +-
docs/_docs/monitoring-metrics/cluster-states.adoc | 2 +-
docs/_docs/monitoring-metrics/new-metrics-system.adoc | 2 +-
docs/_docs/monitoring-metrics/tracing.adoc | 2 +-
docs/_docs/net-specific/asp-net-output-caching.adoc | 4 ++--
docs/_docs/net-specific/asp-net-session-state-caching.adoc | 2 +-
docs/_docs/net-specific/net-deployment-options.adoc | 2 +-
docs/_docs/net-specific/net-java-services-execution.adoc | 2 +-
docs/_docs/net-specific/net-standalone-nodes.adoc | 2 +-
docs/_docs/net-specific/net-troubleshooting.adoc | 8 ++++----
docs/_docs/persistence/change-data-capture.adoc | 2 +-
docs/_docs/persistence/external-storage.adoc | 2 +-
docs/_docs/persistence/native-persistence.adoc | 2 +-
docs/_docs/quick-start/cpp.adoc | 2 +-
docs/_docs/quick-start/dotnet.adoc | 2 +-
docs/_docs/security/security-model.adoc | 2 +-
docs/_docs/services/services.adoc | 4 ++--
docs/_docs/setup.adoc | 2 +-
docs/_docs/snapshots/snapshots.adoc | 2 +-
docs/_docs/sql-reference/ddl.adoc | 2 +-
docs/_docs/sql-reference/operational-commands.adoc | 2 +-
docs/_docs/starting-nodes.adoc | 2 +-
docs/_docs/thin-clients/java-thin-client.adoc | 2 +-
docs/_docs/thin-clients/nodejs-thin-client.adoc | 2 +-
docs/_docs/thin-clients/php-thin-client.adoc | 2 +-
40 files changed, 53 insertions(+), 51 deletions(-)
diff --git a/.gitignore b/.gitignore
index f797a6f990a..06151abef99 100644
--- a/.gitignore
+++ b/.gitignore
@@ -95,3 +95,5 @@ modules/ducktests/tests/certs/*
modules/ducktests/tests/ignitetest.egg-info/**
modules/ducktests/tests/build/**
modules/ducktests/tests/dist/**
+/.gigaide/gigaide.properties
+*.bkp
diff --git a/docs/_data/toc.yaml b/docs/_data/toc.yaml
index 75cb35532d9..38d9b63d931 100644
--- a/docs/_data/toc.yaml
+++ b/docs/_data/toc.yaml
@@ -33,7 +33,7 @@
- title: REST API
url: quick-start/restapi
- title: Installation
- url: installation
+ url: installation/installing-using-zip
items:
- title: Installing Using ZIP Archive
url: installation/installing-using-zip
@@ -224,7 +224,7 @@
- title: Calcite-based SQL Engine
url: SQL/sql-calcite
- title: SQL Reference
- url: sql-reference/sql-reference-overview
+ url: sql-reference/index
items:
- title: SQL Conformance
url: sql-reference/sql-conformance
@@ -233,7 +233,7 @@
- title: Data Manipulation Language (DML)
url: sql-reference/dml
- title: Transactions
- url: sql-reference/transactions
+ url: key-value-api/transactions
- title: Operational Commands
url: sql-reference/operational-commands
- title: Aggregate functions
@@ -341,7 +341,7 @@
- title: Stacking
url: machine-learning/ensemble-methods/stacking
- title: Bagging
- url: machine-learning/ensemble-methods/baggin
+ url: machine-learning/ensemble-methods/bagging
- title: Random Forest
url: machine-learning/ensemble-methods/random-forest
- title: Gradient Boosting
@@ -475,7 +475,7 @@
- title: Index Reader
url: tools/index-reader
- title: Security
- url: security
+ url: security/index
items:
- title: Security Model
url: security/security-model
diff --git a/docs/_docs/SQL/sql-tuning.adoc b/docs/_docs/SQL/sql-tuning.adoc
index e5d1d67b6f4..a09ac669c5f 100644
--- a/docs/_docs/SQL/sql-tuning.adoc
+++ b/docs/_docs/SQL/sql-tuning.adoc
@@ -429,7 +429,7 @@ Presently, the cache is unlimited and can occupy as much
RAM as allocated to you
* Set the JVM max heap size equal to the total size of all the data regions
that store caches for which this on-heap row cache is enabled.
-* link:perf-troubleshooting-guide/memory-tuning#java-heap-and-gc-tuning[Tune] JVM garbage collection accordingly.
+* link:perf-and-troubleshooting/memory-tuning#java-heap-and-gc-tuning[Tune] JVM garbage collection accordingly.
====
== Using TIMESTAMP instead of DATE
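The heap-sizing advice in the sql-tuning hunk above (set the JVM max heap to the total size of the data regions whose caches use the on-heap row cache) can be sketched as a small calculation. This is an illustrative Python sketch, not Ignite API; the region sizes and the helper name are hypothetical.

```python
# Hedged sketch: suggest a -Xmx value equal to the total size of all data
# regions backing caches with the on-heap row cache enabled, per the
# tuning advice above. Numbers and names here are illustrative only.
def suggested_max_heap_bytes(regions):
    """regions: iterable of (max_size_bytes, onheap_cache_enabled) tuples."""
    return sum(size for size, onheap in regions if onheap)

regions = [
    (4 * 1024**3, True),   # 4 GiB region whose caches use the on-heap row cache
    (2 * 1024**3, False),  # 2 GiB region without the on-heap row cache
]
print(suggested_max_heap_bytes(regions))  # bytes to consider for -Xmx
```

The idea is only that regions without the on-heap row cache do not contribute to the heap requirement, so they are excluded from the sum.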
diff --git a/docs/_docs/binary-client-protocol/data-format.adoc
b/docs/_docs/binary-client-protocol/data-format.adoc
index fe4c018ecb4..ea0e0ab15c5 100644
--- a/docs/_docs/binary-client-protocol/data-format.adoc
+++ b/docs/_docs/binary-client-protocol/data-format.adoc
@@ -900,7 +900,7 @@ When this approach is used, COMPACT_FOOTER flag is not set
and the whole object
In this approach, COMPACT_FOOTER flag is set and only field offset sequence is
written to the object footer. In this case client uses schema_id field to
search objects schema in a previously stored meta store to find out fields
order and associate field with its offset.
-If this approach is used, client needs to keep schemas in a special meta store and send/retrieve them to Ignite servers. See link:check[Binary Types] for details.
+If this approach is used, client needs to keep schemas in a special meta store and send/retrieve them to Ignite servers. See link:binary-client-protocol/binary-type-metadata[Binary Types] for details.
The structure of the schema in this case can be found below:
@@ -1023,7 +1023,7 @@ int typeCode = readByteLittleEndian(in);
int val = readIntLittleEndian(in);
----
-Refer to the link:example[example section] for implementation of `write...()` and `read..()` methods shown above.
+Refer to the link:binary-client-protocol/data-format#serialization-and-deserialization-examples[example section] for implementation of `write...()` and `read..()` methods shown above.
As another example, for String type, the structure would be:
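The data-format hunk above references Java-style `readByteLittleEndian`/`readIntLittleEndian` helpers for parsing a type code followed by a little-endian payload. A rough Python equivalent of that byte layout, offered only as an illustration (the `0x03` type code below is a stand-in, not a value from the protocol spec):

```python
import struct

# Hedged sketch: parse a 1-byte type code followed by a 4-byte
# little-endian signed int, mirroring the readByteLittleEndian /
# readIntLittleEndian calls shown in the diff above.
def read_typed_int(buf):
    type_code = buf[0]                         # 1-byte type code
    (val,) = struct.unpack_from('<i', buf, 1)  # 4-byte little-endian int
    return type_code, val

# 0x03 is a hypothetical type code used only for this example.
print(read_typed_int(bytes([0x03, 0x2A, 0x00, 0x00, 0x00])))  # (3, 42)
```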
diff --git a/docs/_docs/clustering/baseline-topology.adoc
b/docs/_docs/clustering/baseline-topology.adoc
index 144e41c9073..622c560d6c8 100644
--- a/docs/_docs/clustering/baseline-topology.adoc
+++ b/docs/_docs/clustering/baseline-topology.adoc
@@ -155,5 +155,5 @@ tab:C++[]
You can use the following tools to monitor and/or manage the baseline topology:
* link:tools/control-script[Control Script]
-* link:monitoring-metrics/metrics#monitoring-topology[JMX Beans]
+* link:monitoring-metrics/new-metrics-system#monitoring-topology[JMX Beans]
diff --git a/docs/_docs/data-modeling/data-partitioning.adoc
b/docs/_docs/data-modeling/data-partitioning.adoc
index 37432e7dd49..3b8d16adcb6 100644
--- a/docs/_docs/data-modeling/data-partitioning.adoc
+++ b/docs/_docs/data-modeling/data-partitioning.adoc
@@ -52,7 +52,7 @@ No data exchange happens between the remaining nodes.
TODO:
You can implement a custom affinity function if you want to control the way
data is distributed in the cluster.
-See the link:advanced-topics/affinity-function[Affinity Function] section in Advanced Topics.
+See the link:data-modeling/data-partitioning#affinity-function[Affinity Function] section.
////////////////////////////////////////////////////////////////////////////////
diff --git a/docs/_docs/data-rebalancing.adoc b/docs/_docs/data-rebalancing.adoc
index 48568044c72..2a603f8f224 100644
--- a/docs/_docs/data-rebalancing.adoc
+++ b/docs/_docs/data-rebalancing.adoc
@@ -149,4 +149,4 @@ The following table lists the properties of
`IgniteConfiguration` related to reb
== Monitoring Rebalancing Process
-You can monitor the link:monitoring-metrics/metrics/new-metrics#caches[rebalancing process for specific caches using JMX].
+You can monitor the link:monitoring-metrics/new-metrics#caches[rebalancing process for specific caches using JMX].
diff --git a/docs/_docs/distributed-computing/distributed-computing.adoc
b/docs/_docs/distributed-computing/distributed-computing.adoc
index ea2b9e1efb6..04ff69ffb73 100644
--- a/docs/_docs/distributed-computing/distributed-computing.adoc
+++ b/docs/_docs/distributed-computing/distributed-computing.adoc
@@ -311,7 +311,7 @@ If you want to use the key and value objects inside
`IgniteCallable` and `Ignite
In the cases where you do not need to colocate computations with data but
simply want to process all data remotely, you can run local cache queries
inside the `call()` method. Consider the following example.
-Let's say we have a cache that stores information about persons and we want to calculate the average age of all persons. One way to accomplish this is to run a link:key-value-api/querying[scan query] that will fetch the ages of all persons to the local node, where you can calculate the average age.
+Let's say we have a cache that stores information about persons and we want to calculate the average age of all persons. One way to accomplish this is to run a link:key-value-api/using-cache-queries[scan query] that will fetch the ages of all persons to the local node, where you can calculate the average age.
A more efficient way, however, is to avoid network calls to other nodes by
running the query locally on each remote node and aggregating the result on the
local node.
diff --git a/docs/_docs/events/events.adoc b/docs/_docs/events/events.adoc
index cc08413f3ee..9885c286548 100644
--- a/docs/_docs/events/events.adoc
+++ b/docs/_docs/events/events.adoc
@@ -360,7 +360,7 @@ Events related to node validation failures are instances of
the link:{events_url
== Management Task Events
Management task events represent the tasks that are executed by Visor or Web
Console.
-This event type can be used to monitor a link:security/cluster-monitor-audit[Web Console activity].
+This event type can be used to monitor Web Console activity.
[cols="2,5,3",opts="header"]
|===
diff --git
a/docs/_docs/extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd.adoc
b/docs/_docs/extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd.adoc
index cd1d972dee0..bb7a9b3e5c0 100644
---
a/docs/_docs/extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd.adoc
+++
b/docs/_docs/extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd.adoc
@@ -18,7 +18,7 @@
IgniteContext is the main entry point to Spark-Ignite integration. To create
an instance of Ignite context, user must provide an instance of SparkContext
and a closure creating `IgniteConfiguration` (configuration factory). Ignite
context will make sure that server or client Ignite nodes exist in all involved
job instances. Alternatively, a path to an XML configuration file can be passed
to `IgniteContext` constructor which will be used to configure nodes being
started.
-When creating an `IgniteContext` instance, an optional boolean `client` argument (defaulting to `true`) can be passed to context constructor. This is typically used in a Shared Deployment installation. When `client` is set to `false`, context will operate in embedded mode and will start server nodes on all workers during the context construction. This is required in an Embedded Deployment installation. See link:ignite-for-spark/installation[Installation] for information on deployment con [...]
+When creating an `IgniteContext` instance, an optional boolean `client` argument (defaulting to `true`) can be passed to context constructor. This is typically used in a Shared Deployment installation. When `client` is set to `false`, context will operate in embedded mode and will start server nodes on all workers during the context construction. This is required in an Embedded Deployment installation. See link:extensions-and-integrations/ignite-for-spark/installation[Installation] for i [...]
[CAUTION]
====
diff --git
a/docs/_docs/extensions-and-integrations/ignite-for-spark/installation.adoc
b/docs/_docs/extensions-and-integrations/ignite-for-spark/installation.adoc
index 730ec590eff..4c6f2c1c180 100644
--- a/docs/_docs/extensions-and-integrations/ignite-for-spark/installation.adoc
+++ b/docs/_docs/extensions-and-integrations/ignite-for-spark/installation.adoc
@@ -20,7 +20,7 @@ Shared deployment implies that Apache Ignite nodes are
running independently fro
=== Standalone Deployment
-In the Standalone deployment mode, Ignite nodes should be deployed together with Spark Worker nodes. Instruction on Ignite installation can be found link:installation[here]. After you install Ignite on all worker nodes, start a node on each Spark worker with your config using `ignite.sh` script.
+In the Standalone deployment mode, Ignite nodes should be deployed together with Spark Worker nodes. Instruction on Ignite installation can be found link:installation/installing-using-zip[here]. After you install Ignite on all worker nodes, start a node on each Spark worker with your config using `ignite.sh` script.
=== Adding Ignite libraries to Spark classpath by default
diff --git
a/docs/_docs/extensions-and-integrations/ignite-for-spark/troubleshooting.adoc
b/docs/_docs/extensions-and-integrations/ignite-for-spark/troubleshooting.adoc
index 2072d301b0c..2c9e4cf4ad8 100644
---
a/docs/_docs/extensions-and-integrations/ignite-for-spark/troubleshooting.adoc
+++
b/docs/_docs/extensions-and-integrations/ignite-for-spark/troubleshooting.adoc
@@ -20,4 +20,4 @@ This will happen if you have created `IgniteContext` in
client mode (which is de
* I am getting `java.lang.ClassNotFoundException`
`org.apache.ignite.logger.java.JavaLoggerFileHandler` when using IgniteContext
-This issue appears when you do not have any loggers included in classpath and Ignite tries to use standard Java logging. By default Spark loads all user jar files using separate class loader. Java logging framework, on the other hand, uses application class loader to initialize log handlers. To resolve this, you can either add `ignite-log4j2` module to the list of the used jars so that Ignite would use Log4j2 as a logging subsystem, or alter default Spark classpath as described link:igni [...]
+This issue appears when you do not have any loggers included in classpath and Ignite tries to use standard Java logging. By default Spark loads all user jar files using separate class loader. Java logging framework, on the other hand, uses application class loader to initialize log handlers. To resolve this, you can either add `ignite-log4j2` module to the list of the used jars so that Ignite would use Log4j2 as a logging subsystem, or alter default Spark classpath as described link:exte [...]
diff --git a/docs/_docs/installation/kubernetes/generic-configuration.adoc
b/docs/_docs/installation/kubernetes/generic-configuration.adoc
index 09abbfe53c7..15c182a73d3 100644
--- a/docs/_docs/installation/kubernetes/generic-configuration.adoc
+++ b/docs/_docs/installation/kubernetes/generic-configuration.adoc
@@ -340,11 +340,11 @@ To scale your StatefulSet, run the following command:
{command} scale sts ignite-cluster --replicas=3 -n ignite
----
-After scaling the cluster, link:control-script#activation-deactivation-and-topology-management[change the baseline topology] accordingly.
+After scaling the cluster, link:tools/control-script#activation-deactivation-and-topology-management[change the baseline topology] accordingly.
--
-CAUTION: If you reduce the number of nodes by more than the link:configuring-caches/configuring-backups[number of partition backups], you may lose data. The proper way to scale down is to redistribute the data after removing a node by changing the link:control-script#removing-nodes-from-baseline-topology[baseline topology].
+CAUTION: If you reduce the number of nodes by more than the link:configuring-caches/configuring-backups[number of partition backups], you may lose data. The proper way to scale down is to redistribute the data after removing a node by changing the link:tools/control-script#removing-nodes-from-baseline-topology[baseline topology].
== Connecting to the Cluster
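The scale-down caution in the Kubernetes hunk above reduces to simple arithmetic: with `backups` redundant copies per partition, removing more than `backups` nodes at once can destroy every copy of some partition. A hedged illustrative sketch (the helper name is hypothetical; `backups` corresponds to the cache's configured backup count):

```python
# Hedged sketch of the caution above: each partition has 1 primary plus
# `backups` backup copies, so at most `backups` nodes can disappear at
# once without risking that some partition loses all of its copies.
def safe_to_remove(nodes_removed, backups):
    return nodes_removed <= backups

print(safe_to_remove(1, backups=1))  # True: a backup copy still survives
print(safe_to_remove(2, backups=1))  # False: primary and backup may both be gone
```

This is why the docs recommend removing one node at a time and updating the baseline topology between removals, letting rebalancing restore redundancy first.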
diff --git a/docs/_docs/key-value-api/transactions.adoc
b/docs/_docs/key-value-api/transactions.adoc
index 18368bfa640..0a31036770d 100644
--- a/docs/_docs/key-value-api/transactions.adoc
+++ b/docs/_docs/key-value-api/transactions.adoc
@@ -355,8 +355,8 @@ tab:C++[unsupported]
== Monitoring Transactions
-Refer to the link:monitoring-metrics/metrics#monitoring-transactions[Monitoring Transactions] section for the list of metrics that expose some transaction-related information.
+Refer to the link:monitoring-metrics/new-metrics-system#monitoring-transactions[Monitoring Transactions] section for the list of metrics that expose some transaction-related information.
For the information on how to trace transactions, refer to the link:monitoring-metrics/tracing[Tracing] section.
-You can also use the link:control-script#transaction-management[control script] to get information about, or cancel, specific transactions being executed in the cluster.
+You can also use the link:tools/control-script#transaction-management[control script] to get information about, or cancel, specific transactions being executed in the cluster.
diff --git a/docs/_docs/machine-learning/machine-learning.adoc
b/docs/_docs/machine-learning/machine-learning.adoc
index 79176c9dd7c..1b123e46d6e 100644
--- a/docs/_docs/machine-learning/machine-learning.adoc
+++ b/docs/_docs/machine-learning/machine-learning.adoc
@@ -46,7 +46,7 @@ Identifying to which category a new observation belongs, on
the basis of a train
*Applicability:* spam detection, image recognition, credit scoring, disease
identification.
-*Algorithms:* link:machine-learning/binary-classification/logistic-regression[Logistic Regression], link:machine-learning/binary-classification/linear-svm[Linear SVM (Support Vector Machine)], link:machine-learning/binary-classification/knn-classification[k-NN Classification], link:machine-learning/binary-classification/naive-bayes[Naive Bayes], link:machine-learning/binary-classification/decision-trees[Decision Trees], link:machine-learning/binary-classification/random-forest[Random For [...]
+*Algorithms:* link:machine-learning/binary-classification/logistic-regression[Logistic Regression], link:machine-learning/binary-classification/linear-svm[Linear SVM (Support Vector Machine)], link:machine-learning/binary-classification/knn-classification[k-NN Classification], link:machine-learning/binary-classification/naive-bayes[Naive Bayes], link:machine-learning/binary-classification/decision-trees[Decision Trees], link:machine-learning/ensemble-methods/random-forest[Random Forest], [...]
=== Regression
diff --git a/docs/_docs/monitoring-metrics/cluster-id.adoc
b/docs/_docs/monitoring-metrics/cluster-id.adoc
index 26bb5610dd3..4107b60efa2 100644
--- a/docs/_docs/monitoring-metrics/cluster-id.adoc
+++ b/docs/_docs/monitoring-metrics/cluster-id.adoc
@@ -23,7 +23,7 @@ The length of the tag is limited by 280 characters.
You can use the following methods to view the cluster ID and view or change
the cluster tag:
-* Via the link:control-script#cluster-id-and-tag[control script].
+* Via the link:tools/control-script#cluster-id-and-tag[control script].
* JMX Bean:
+
--
diff --git a/docs/_docs/monitoring-metrics/cluster-states.adoc
b/docs/_docs/monitoring-metrics/cluster-states.adoc
index 0fd65c14a10..45ee8f9cfb2 100644
--- a/docs/_docs/monitoring-metrics/cluster-states.adoc
+++ b/docs/_docs/monitoring-metrics/cluster-states.adoc
@@ -46,7 +46,7 @@ DDL or DML statements that modify the data are prohibited as
well.
You can change the cluster state in multiple ways:
-* link:control-script#getting-cluster-state[Control script]:
+* link:tools/control-script#getting-cluster-state[Control script]:
+
[source, shell]
----
diff --git a/docs/_docs/monitoring-metrics/new-metrics-system.adoc
b/docs/_docs/monitoring-metrics/new-metrics-system.adoc
index bcfae02a5f3..0c0a49e204b 100644
--- a/docs/_docs/monitoring-metrics/new-metrics-system.adoc
+++ b/docs/_docs/monitoring-metrics/new-metrics-system.adoc
@@ -274,7 +274,7 @@ It is reused when new entries need to be added to the
storage on subsequent writ
The allocated size is available at the level of data storage, data region, and
cache group metrics.
The metric is called `TotalAllocatedSize`.
-You can also get an estimate of the actual size of data by multiplying the number of link:memory-centric-storage#data-pages[data pages] in use by the fill factor. The fill factor is the ratio of the size of data in a page to the page size, averaged over all pages. The number of pages in use and the fill factor are available at the level of data <<Data Region Size,region metrics>>.
+You can also get an estimate of the actual size of data by multiplying the number of link:memory-architecture#data-pages[data pages] in use by the fill factor. The fill factor is the ratio of the size of data in a page to the page size, averaged over all pages. The number of pages in use and the fill factor are available at the level of data <<Data Region Size,region metrics>>.
Add up the estimated size of all data regions to get the estimated total
amount of data on the node.
diff --git a/docs/_docs/monitoring-metrics/tracing.adoc
b/docs/_docs/monitoring-metrics/tracing.adoc
index a16298cf06e..aa107680580 100644
--- a/docs/_docs/monitoring-metrics/tracing.adoc
+++ b/docs/_docs/monitoring-metrics/tracing.adoc
@@ -88,7 +88,7 @@ Enable tracing for a specific API:
./control.sh --tracing-configuration set --scope TX --sampling-rate 1
----
-Refer to the link:control-script#tracing-configuration[Control Script] sections for the list of all parameters.
+Refer to the link:tools/control-script#tracing-configuration[Control Script] sections for the list of all parameters.
=== Programmatically
diff --git a/docs/_docs/net-specific/asp-net-output-caching.adoc
b/docs/_docs/net-specific/asp-net-output-caching.adoc
index aaaadc95ffc..a8a2bce789a 100644
--- a/docs/_docs/net-specific/asp-net-output-caching.adoc
+++ b/docs/_docs/net-specific/asp-net-output-caching.adoc
@@ -27,7 +27,7 @@ be shared between web servers.
== Launching Ignite Automatically
To start Ignite automatically for output caching, configure it
-link:net-specific/configuration-options#configure-with-application-or-web-config-files[in the web.config file via IgniteConfigurationSection]
+link:net-specific/net-configuration-options#configure-with-application-or-web-config-files[in the web.config file via IgniteConfigurationSection]
[tabs]
--
@@ -90,4 +90,4 @@ tab:web.config[]
The Ignite instance needs to be started before any request is served.
Typically this is done in the `Application_Start` method of the `global.asax`.
-See link:net-specific/deployment-options#asp-net-deployment[ASP.NET Deployment] for web deployment specifics related to the `IGNITE_HOME` variable.
+See link:net-specific/net-deployment-options#asp-net-deployment[ASP.NET Deployment] for web deployment specifics related to the `IGNITE_HOME` variable.
diff --git a/docs/_docs/net-specific/asp-net-session-state-caching.adoc
b/docs/_docs/net-specific/asp-net-session-state-caching.adoc
index 4c3e9d1ed16..c9f3f51335c 100644
--- a/docs/_docs/net-specific/asp-net-session-state-caching.adoc
+++ b/docs/_docs/net-specific/asp-net-session-state-caching.adoc
@@ -78,4 +78,4 @@ for each application via `cacheName` attribute.
|===
For more details on how to start Ignite within an ASP.NET application, refer
to link:net-specific/asp-net-output-caching[ASP.NET Output Caching].
-Also, see link:net-specific/deployment-options#asp-net-deployment[ASP.NET Deployment] for web deployment specifics related to the `IGNITE_HOME` variable.
+Also, see link:net-specific/net-deployment-options#asp-net-deployment[ASP.NET Deployment] for web deployment specifics related to the `IGNITE_HOME` variable.
diff --git a/docs/_docs/net-specific/net-deployment-options.adoc
b/docs/_docs/net-specific/net-deployment-options.adoc
index 00348a24277..8bc33fa35a1 100644
--- a/docs/_docs/net-specific/net-deployment-options.adoc
+++ b/docs/_docs/net-specific/net-deployment-options.adoc
@@ -57,7 +57,7 @@ tab:MyApp.csproj[]
Ignite.NET supports
link:https://docs.microsoft.com/en-us/dotnet/core/deploying/single-file[single
file deployment] that is available in .NET Core 3 / .NET 5+.
* Use the `IncludeAllContentForSelfExtract` MSBuild property to include jar
files into the single-file bundle, or ship them separately.
-* See xref:net-troubleshooting.adoc#libcoreclr-not-found[Troubleshooting: DllNotFoundException] for a workaround that is required
+* See link:net-specific/net-troubleshooting#libcoreclr-not-found[Troubleshooting: DllNotFoundException] for a workaround that is required
on .NET 5 with some Ignite versions.
Publish command example:
diff --git a/docs/_docs/net-specific/net-java-services-execution.adoc
b/docs/_docs/net-specific/net-java-services-execution.adoc
index ce9ae3029ba..18f61760418 100644
--- a/docs/_docs/net-specific/net-java-services-execution.adoc
+++ b/docs/_docs/net-specific/net-java-services-execution.adoc
@@ -111,6 +111,6 @@ The Java methods are resolved the following way:
Ignite invoke the matched method or throws an exception in case of ambiguity.
* The method return type is ignored, since .NET and Java do not allow
identical methods with different return types.
-See link:net-specific/platform-interoperability[Platform Interoperability, Type Compatibility section] for details on
+See link:net-specific/net-platform-interoperability[Platform Interoperability, Type Compatibility section] for details on
method arguments and result mapping. Note, that the `params/varargs` are also
supported, since in .NET and Java these are
syntactic sugar for object arrays.
diff --git a/docs/_docs/net-specific/net-standalone-nodes.adoc
b/docs/_docs/net-specific/net-standalone-nodes.adoc
index 823cccaf1ad..2e33b70f54c 100644
--- a/docs/_docs/net-specific/net-standalone-nodes.adoc
+++ b/docs/_docs/net-specific/net-standalone-nodes.adoc
@@ -96,7 +96,7 @@ via `-Assembly` command line argument or `Ignite.Assembly`
app setting.
The following functionality requires a corresponding assembly to be loaded on
all nodes:
-* ICompute (supports automatic loading, see link:net-specific/remote-assembly-loading[Remote Assembly Loading])
+* ICompute (supports automatic loading, see link:net-specific/net-remote-assembly-loading[Remote Assembly Loading])
* Scan Queries with filter
* Continuous Queries with filter
* ICache.Invoke methods
diff --git a/docs/_docs/net-specific/net-troubleshooting.adoc
b/docs/_docs/net-specific/net-troubleshooting.adoc
index d74943b16aa..c1ff5d98057 100644
--- a/docs/_docs/net-specific/net-troubleshooting.adoc
+++ b/docs/_docs/net-specific/net-troubleshooting.adoc
@@ -77,7 +77,7 @@ The `126 ERROR_MOD_NOT_FOUND` code can occur due to missing
dependencies:
=== Java class is not found
Check your the `IGNITE_HOME` environment variable,
`IgniteConfiguration.IgniteHome` and `IgniteConfiguration.JvmClasspath`
properties.
-Refer to link:net-specific/deployment-options[Deployment] section for more details. ASP.NET/IIS scenarios require additional steps.
+Refer to link:net-specific/net-deployment-options[Deployment] section for more details. ASP.NET/IIS scenarios require additional steps.
=== Freeze on Ignition.Start
@@ -124,7 +124,7 @@ tab:XML[]
=== Could not load file or assembly 'MyAssembly' or one of its dependencies.
The system cannot find the file specified.
This exception can occur due to missing assemblies on remote nodes.
-See link:net-specific/standalone-nodes#load-user-assemblies[Standalone Nodes: Loading User Assemblies] for details.
+See link:net-specific/net-standalone-nodes#load-user-assemblies[Standalone Nodes: Loading User Assemblies] for details.
=== Stack smashing detected: dotnet terminated
@@ -156,7 +156,7 @@ To work around the issue, make sure that child processes
are created either only
For example, when `direct-io` is used, and .NET code requires starting a child
process,
move the process handling logic to Java side and invoke it with
-link:developers-guide/distributed-computing/distributed-computing[Compute] `ExecuteJavaTask` API.
+link:distributed-computing/distributed-computing[Compute] `ExecuteJavaTask` API.
Alternatively, use Services API to call Java service from .NET.
=== [[libcoreclr-not-found]] DllNotFoundException: Unable to load shared
library 'libcoreclr.so' or one of its dependencies
@@ -202,4 +202,4 @@ tab:XML[]
<CETCompat>false</CETCompat>
</PropertyGroup>
----
---
\ No newline at end of file
+--
diff --git a/docs/_docs/persistence/change-data-capture.adoc
b/docs/_docs/persistence/change-data-capture.adoc
index 2f0848591fb..1fc27c50a4f 100644
--- a/docs/_docs/persistence/change-data-capture.adoc
+++ b/docs/_docs/persistence/change-data-capture.adoc
@@ -193,4 +193,4 @@ NOTE: There are no guarantees of notifying the CDC consumer
on concurrent cache
== cdc-ext
Ignite extensions project has
link:https://github.com/apache/ignite-extensions/tree/master/modules/cdc-ext[cdc-ext]
module which provides two way to setup cross cluster replication based on CDC.
-Detailed documentation can be found on link:extensions-and-integrations/change-data-capture-extensions[page].
+Detailed documentation can be found on link:extensions-and-integrations/change-data-capture/overview[page].
diff --git a/docs/_docs/persistence/external-storage.adoc
b/docs/_docs/persistence/external-storage.adoc
index a7ab74f3cdd..a5b359d7a9e 100644
--- a/docs/_docs/persistence/external-storage.adoc
+++ b/docs/_docs/persistence/external-storage.adoc
@@ -220,5 +220,5 @@ Refer to
link:extensions-and-integrations/cassandra/overview[this documentation
////
== Implementing Custom CacheStore
-See link:advanced-topics/custom-cache-store[Implementing Custom Cache Store].
+See link:persistence/custom-cache-store[Implementing Custom Cache Store].
////
diff --git a/docs/_docs/persistence/native-persistence.adoc
b/docs/_docs/persistence/native-persistence.adoc
index 18f8f677cd8..5ca6e86d623 100644
--- a/docs/_docs/persistence/native-persistence.adoc
+++ b/docs/_docs/persistence/native-persistence.adoc
@@ -346,7 +346,7 @@ This process helps to utilize disk space frugally by
keeping pages in the most u
See the following related documentation:
-* link:monitoring-metrics/metrics#monitoring-checkpointing-operations[Monitoring Checkpointing Operations].
+* link:monitoring-metrics/new-metrics-system#monitoring-checkpointing-operations[Monitoring Checkpointing Operations].
* link:persistence/persistence-tuning#adjusting-checkpointing-buffer-size[Adjusting Checkpointing Buffer Size]
== Configuration Properties
diff --git a/docs/_docs/quick-start/cpp.adoc b/docs/_docs/quick-start/cpp.adoc
index 08aa6df11d6..c68cf97bedf 100644
--- a/docs/_docs/quick-start/cpp.adoc
+++ b/docs/_docs/quick-start/cpp.adoc
@@ -146,7 +146,7 @@ From here, you may want to:
* Check out the link:thin-clients/cpp-thin-client[C++ thin client] that
provides a lightweight form of connectivity
to Ignite clusters
* Explore the link:{githubUrl}/modules/platforms/cpp/examples[additional C++
examples] included with Ignite
-* Refer to the link:cpp-specific[C{plus}{plus} specific section] of the documentation to learn more about capabilities
+* Refer to the link:cpp-specific/cpp-serialization[C{plus}{plus} specific section] of the documentation to learn more about capabilities
that are available for C++ applications
diff --git a/docs/_docs/quick-start/dotnet.adoc
b/docs/_docs/quick-start/dotnet.adoc
index e9650c73a17..0591bd0ee06 100644
--- a/docs/_docs/quick-start/dotnet.adoc
+++ b/docs/_docs/quick-start/dotnet.adoc
@@ -89,7 +89,7 @@ From here, you may want to:
* Check out the link:thin-clients/dotnet-thin-client[.NET thin client] that provides a lightweight form of connectivity
to Ignite clusters
* Explore the link:{githubUrl}/modules/platforms/dotnet/examples[additional examples] included with Ignite
-* Refer to the link:net-specific[NET-specific section] of the documentation to learn more about capabilities
+* Refer to the link:net-specific/index[NET-specific section] of the documentation to learn more about capabilities
that are available for C# and .NET applications.
diff --git a/docs/_docs/security/security-model.adoc b/docs/_docs/security/security-model.adoc
index eb02d4472cd..1f4b2548664 100644
--- a/docs/_docs/security/security-model.adoc
+++ b/docs/_docs/security/security-model.adoc
@@ -16,5 +16,5 @@
When it comes to Apache Ignite security, it is very important to note that by having access to any Ignite cluster node (a server node or a thick client node) it is possible to perform malicious actions on the cluster. There are no mechanisms that could provide protection for the cluster in such scenarios.
-Therefore, all link:../clustering/network-configuration.adoc#_discovery[Discovery] and link:../clustering/network-configuration.adoc#_communication[Communication] ports for Ignite server and thick client nodes should only be available inside a protected subnetwork (the so-called demilitarized zone or DMZ). Should those ports be exposed outside of DMZ, it is advised to control access to them by using SSL certificates issued by a trusted Certification Authority (please see this link:ssl-tl [...]
+Therefore, all link:clustering/network-configuration#_discovery[Discovery] and link:clustering/network-configuration#_communication[Communication] ports for Ignite server and thick client nodes should only be available inside a protected subnetwork (the so-called demilitarized zone or DMZ). Should those ports be exposed outside of DMZ, it is advised to control access to them by using SSL certificates issued by a trusted Certification Authority (please see this link:security/ssl-tls[page] [...]
diff --git a/docs/_docs/services/services.adoc b/docs/_docs/services/services.adoc
index 7127e11d505..46801d4e661 100644
--- a/docs/_docs/services/services.adoc
+++ b/docs/_docs/services/services.adoc
@@ -235,8 +235,8 @@ tab:C++[]
// TODO the @ServiceResource annotation
== Service Awareness [[service_awareness]]
-For link:../thin-clients/java-thin-client.adoc#java_thin_client[Java Thin Client] you can activate Service Awareness.
-To do that, enable link:../thin-clients/java-thin-client.adoc#partition_awareness[Partition Awareness].
+For link:thin-clients/java-thin-client#java_thin_client[Java Thin Client] you can activate Service Awareness.
+To do that, enable link:thin-clients/java-thin-client#partition_awareness[Partition Awareness].
Without Service Awareness, the invocation requests are sent to a random node. If it has no service
instance deployed, the request is redirected to a different node. This additional network hop adds overhead.
diff --git a/docs/_docs/setup.adoc b/docs/_docs/setup.adoc
index cdb204007bd..ae27ce2af98 100644
--- a/docs/_docs/setup.adoc
+++ b/docs/_docs/setup.adoc
@@ -218,7 +218,7 @@ The following modules are available:
|ignite-ml | Ignite ML Grid provides machine learning features and relevant data structures and methods of linear algebra, including on heap and off heap, dense and sparse, local and distributed implementations.
-Refer to the link:machine-learning/ml[Machine Learning] documentation for details.
+Refer to the link:machine-learning/machine-learning[Machine Learning] documentation for details.
|ignite-rest-http | Ignite REST-HTTP starts a Jetty-based server within a node that can be used to execute tasks and/or cache commands in grid using HTTP-based link:restapi[RESTful APIs].
diff --git a/docs/_docs/snapshots/snapshots.adoc b/docs/_docs/snapshots/snapshots.adoc
index 00a0eac64cb..fe61b3c4923 100644
--- a/docs/_docs/snapshots/snapshots.adoc
+++ b/docs/_docs/snapshots/snapshots.adoc
@@ -388,7 +388,7 @@ The snapshot procedure has some limitations that you should be aware of before u
* You can have only one snapshotting operation running at a time.
* The snapshot operation is prohibited during a master key change and/or cache group key change.
* The snapshot procedure is interrupted if a server node leaves the cluster.
-* Concurrent updates from link:../data-streaming.adoc#_limitations[DataStreamer] with default setting 'allowOverwrite'
+* Concurrent updates from link:data-streaming#_limitations[DataStreamer] with default setting 'allowOverwrite'
(false) into a persistent cache can cause that cache data stored inconsistent.
If any of these limitations prevent you from using Apache Ignite, then select alternate snapshotting implementations for
diff --git a/docs/_docs/sql-reference/ddl.adoc b/docs/_docs/sql-reference/ddl.adoc
index 048ee614396..2ebad5484ac 100644
--- a/docs/_docs/sql-reference/ddl.adoc
+++ b/docs/_docs/sql-reference/ddl.adoc
@@ -64,7 +64,7 @@ or the `SQL_{SCHEMA_NAME}_{TABLE}` format will be used if the parameter not spec
** `KEY_TYPE=<custom name of the key type>` - sets the name of the custom key type that is used from the key-value APIs in Ignite. The name should correspond to a Java, .NET, or C++ class, or it can be a random one if link:data-modeling/data-modeling#binary-object-format[BinaryObjects] is used instead of a custom class. The number of fields and their types in the custom key type has to correspond to the `PRIMARY KEY`. Refer to the <<Use non-SQL API>> section below for more details.
** `VALUE_TYPE=<custom name of the value type of the new cache>` - sets the name of a custom value type that is used from the key-value and other non-SQL APIs in Ignite. The name should correspond to a Java, .NET, or C++ class, or it can be a random one if link:data-modeling/data-modeling#binary-object-format[BinaryObjects] is used instead of a custom class. The value type should include all the columns defined in the CREATE TABLE command except for those listed in the `PRIMARY KEY` constraint. Refer to the <<Use non-SQL API>> section below for more details.
-Also, the same `VALUE_TYPE` is required to use SQL queries over data replicated with link:extensions-and-integrations/change-data-capture-extensions[CDC].
+Also, the same `VALUE_TYPE` is required to use SQL queries over data replicated with link:extensions-and-integrations/change-data-capture/overview[CDC].
** `WRAP_KEY=<true | false>` - this flag controls whether a _single column_ `PRIMARY KEY` should be wrapped in the link:data-modeling/data-modeling#binary-object-format[BinaryObjects] format or not. By default, this flag is set to false. This flag does not have any effect on the `PRIMARY KEY` with multiple columns; it always gets wrapped regardless of the value of the parameter.
** `WRAP_VALUE=<true | false>` - this flag controls whether a single column value of a primitive type should be wrapped in the link:data-modeling/data-modeling#binary-object-format[BinaryObjects] format or not. By default, this flag is set to true. This flag does not have any effect on the value with multiple columns; it always gets wrapped regardless of the value of the parameter. Set this parameter to false if you have a single column value and do not plan to add additional columns to [...]
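To make the `WITH` parameters in this hunk concrete, a hedged sketch (the table, class, and column names are invented for illustration, and a running cluster is assumed):

```sql
-- A multi-column PRIMARY KEY is always wrapped, so KEY_TYPE and VALUE_TYPE
-- name the (hypothetical) classes used for key-value access to the same data.
CREATE TABLE Person (
  id INT,
  city_id INT,
  name VARCHAR,
  PRIMARY KEY (id, city_id)
) WITH "KEY_TYPE=com.example.PersonKey,VALUE_TYPE=com.example.Person";
```

With this DDL, `com.example.PersonKey` must declare fields matching `id` and `city_id`, and `com.example.Person` the remaining column `name`.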
diff --git a/docs/_docs/sql-reference/operational-commands.adoc b/docs/_docs/sql-reference/operational-commands.adoc
index 875fc30431b..97f56ed422d 100644
--- a/docs/_docs/sql-reference/operational-commands.adoc
+++ b/docs/_docs/sql-reference/operational-commands.adoc
@@ -68,7 +68,7 @@ To stream data into your cluster, prepare a file with the `SET STREAMING ON` com
[NOTE]
====
-Setting 'STREAMING ON' uses link:../data-streaming.adoc#_limitations[DataStreamer] which doesn't guarantee by default data consistency until successfully finished.
+Setting 'STREAMING ON' uses link:data-streaming#limitations[DataStreamer] which doesn't guarantee by default data consistency until successfully finished.
====
[source,sql]
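As an illustration of the `SET STREAMING ON` flow this hunk documents (the table and rows are made up; the NOTE above about DataStreamer consistency applies until streaming is switched off):

```sql
SET STREAMING ON;

-- Bulk-loaded rows go through the DataStreamer and are not guaranteed
-- to be consistently visible until the stream finishes.
INSERT INTO City (id, name) VALUES (1, 'Denver');
INSERT INTO City (id, name) VALUES (2, 'Oslo');

SET STREAMING OFF;
```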
diff --git a/docs/_docs/starting-nodes.adoc b/docs/_docs/starting-nodes.adoc
index ae65072a5b9..3d8f257a3d7 100644
--- a/docs/_docs/starting-nodes.adoc
+++ b/docs/_docs/starting-nodes.adoc
@@ -182,7 +182,7 @@ CAUTION: If you have a cache without partition backups and you stop a node (even
When this property is set, the last node in the cluster will not stop gracefully.
You will have to terminate the process by sending the `kill -9` signal.
-If you want to shut down the entire cluster, link:control-script#deactivating-cluster[deactivate] it and then stop all the nodes.
+If you want to shut down the entire cluster, link:tools/control-script#deactivating-cluster[deactivate] it and then stop all the nodes.
Alternatively, you can stop all the nodes non-gracefully (by sending `kill -9`).
However, the latter option is not recommended for clusters with persistence.
////
diff --git a/docs/_docs/thin-clients/java-thin-client.adoc b/docs/_docs/thin-clients/java-thin-client.adoc
index 403cdf4d10a..33387aaaf6b 100644
--- a/docs/_docs/thin-clients/java-thin-client.adoc
+++ b/docs/_docs/thin-clients/java-thin-client.adoc
@@ -145,7 +145,7 @@ Also, you can check a link:https://github.com/apache/ignite/blob/master/examples
[NOTE]
====
-Partition Awareness also enables link:../services/services.adoc#service_awareness[Service Awareness]
+Partition Awareness also enables link:services/services#service_awareness[Service Awareness]
====
== Using Key-Value API
diff --git a/docs/_docs/thin-clients/nodejs-thin-client.adoc b/docs/_docs/thin-clients/nodejs-thin-client.adoc
index 4f7d9bc3bb4..6ce6581a227 100644
--- a/docs/_docs/thin-clients/nodejs-thin-client.adoc
+++ b/docs/_docs/thin-clients/nodejs-thin-client.adoc
@@ -197,7 +197,7 @@ include::{source_code_dir}/scanquery.js[tag="scan-query", indent=0]
----
== Executing SQL Statements
-The Node.js thin client supports all link:sql-reference[SQL commands] that are supported by Ignite.
+The Node.js thin client supports all link:sql-reference/sql-conformance[SQL commands] that are supported by Ignite.
The commands are executed via the `query(SqlFieldQuery)` method of the cache object.
The method accepts an instance of `SqlFieldsQuery` that represents a SQL statement and returns an instance of the `SqlFieldsCursor` class. Use the cursor to iterate over the result set or get all results at once.
diff --git a/docs/_docs/thin-clients/php-thin-client.adoc b/docs/_docs/thin-clients/php-thin-client.adoc
index 3b99ad54f6e..5a820fb45e5 100644
--- a/docs/_docs/thin-clients/php-thin-client.adoc
+++ b/docs/_docs/thin-clients/php-thin-client.adoc
@@ -124,7 +124,7 @@ include::code-snippets/php/UsingKeyValueApi.php[tag=scanQry,indent=0]
----
== Executing SQL Statements
-The PHP thin client supports all link:sql-reference[SQL commands] that are supported by Ignite.
+The PHP thin client supports all link:sql-reference/sql-conformance[SQL commands] that are supported by Ignite.
The commands are executed via the `query(SqlFieldQuery)` method of the cache object.
The method accepts an instance of `SqlFieldsQuery` that represents a SQL statement.
The `query()` method returns a cursor object with the standard PHP Iterator interface — use this cursor to iterate over the result set lazily, one by one. In addition, the cursor has methods to get all results at once.