This is an automated email from the ASF dual-hosted git repository.

fanningpj pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-pekko-projection.git


The following commit(s) were added to refs/heads/main by this push:
     new 838c7fd  remove spanner docs and change some Akka refs to Pekko (#9)
838c7fd is described below

commit 838c7fd3709881cecf6af1687e1d1d278fb66045
Author: PJ Fanning <[email protected]>
AuthorDate: Wed Jan 25 14:04:13 2023 +0000

    remove spanner docs and change some Akka refs to Pekko (#9)
    
    * remove spanner docs and change some Akka refs to Pekko
    
    * more Akka refs
---
 docs/release-train-issue-template.md               |  2 +-
 docs/src/main/paradox/actor.md                     |  2 +-
 docs/src/main/paradox/cassandra.md                 | 18 ++++++-------
 docs/src/main/paradox/classic.md                   |  6 ++---
 docs/src/main/paradox/durable-state.md             | 14 +++++-----
 docs/src/main/paradox/eventsourced.md              | 18 ++++++-------
 docs/src/main/paradox/flow.md                      |  4 +--
 .../paradox/getting-started/event-generator-app.md |  2 +-
 docs/src/main/paradox/getting-started/index.md     |  6 ++---
 .../paradox/getting-started/running-cluster.md     |  8 +++---
 docs/src/main/paradox/getting-started/running.md   | 10 ++++----
 .../main/paradox/getting-started/setup-your-app.md | 10 ++++----
 .../paradox/getting-started/source-provider.md     |  8 +++---
 docs/src/main/paradox/getting-started/testing.md   |  4 +--
 docs/src/main/paradox/jdbc.md                      | 20 +++++++--------
 docs/src/main/paradox/kafka.md                     | 12 ++++-----
 docs/src/main/paradox/overview.md                  | 30 +++++++++++-----------
 docs/src/main/paradox/projection-settings.md       |  2 +-
 docs/src/main/paradox/running.md                   | 24 ++++++++---------
 docs/src/main/paradox/slick.md                     | 20 +++++++--------
 docs/src/main/paradox/testing.md                   | 10 ++++----
 docs/src/main/paradox/use-cases.md                 |  2 +-
 .../test/java/jdocs/guide/EventGeneratorApp.java   |  2 +-
 .../test/scala/docs/guide/EventGeneratorApp.scala  |  2 +-
 projection-core/src/main/resources/reference.conf  |  2 +-
 25 files changed, 115 insertions(+), 123 deletions(-)

diff --git a/docs/release-train-issue-template.md b/docs/release-train-issue-template.md
index 22709fa..2feb8d7 100644
--- a/docs/release-train-issue-template.md
+++ b/docs/release-train-issue-template.md
@@ -1,4 +1,4 @@
-Release Akka Projections $VERSION$
+Release Apache Pekko Projections $VERSION$
 
 <!--
 
diff --git a/docs/src/main/paradox/actor.md b/docs/src/main/paradox/actor.md
index 8679748..927a910 100644
--- a/docs/src/main/paradox/actor.md
+++ b/docs/src/main/paradox/actor.md
@@ -1,6 +1,6 @@
 # Processing with Actor
 
-A good alternative for advanced state management is to implement the handler as an [actor](https://doc.akka.io/docs/akka/current/typed/actors.html).
+A good alternative for advanced state management is to implement the handler as an [actor](https://pekko.apache.org/docs/pekko/current/typed/actors.html).
 
 The following example is using the `CassandraProjection` but the handler and actor would be the same if used
 any other @ref:[offset storage](overview.md). 
diff --git a/docs/src/main/paradox/cassandra.md b/docs/src/main/paradox/cassandra.md
index 2444b5d..cdd0464 100644
--- a/docs/src/main/paradox/cassandra.md
+++ b/docs/src/main/paradox/cassandra.md
@@ -2,7 +2,7 @@
 
 The @apidoc[CassandraProjection$] has support for storing the offset in Cassandra.
 
-The source of the envelopes can be @ref:[events from Akka Persistence](eventsourced.md) or any other `SourceProvider`
+The source of the envelopes can be @ref:[events from Apache Pekko Persistence](eventsourced.md) or any other `SourceProvider`
 with supported @ref:[offset types](#offset-types).
 
 The envelope handler can integrate with anything, such as publishing to a message broker, or updating a read model
@@ -13,7 +13,7 @@ processing semantics, but not exactly-once.
 
 ## Dependencies
 
-To use the Cassandra module of Akka Projections add the following dependency in your project:
+To use the Cassandra module of Apache Pekko Projections add the following dependency in your project:
 
 @@dependency [sbt,Maven,Gradle] {
   group=org.apache.pekko
@@ -21,7 +21,7 @@ To use the Cassandra module of Akka Projections add the following dependency in
   version=$project.version$
 }
 
-Akka Projections requires Akka $akka.version$ or later, see @ref:[Akka version](overview.md#akka-version).
+Apache Pekko Projections requires Akka $akka.version$ or later, see @ref:[Akka version](overview.md#akka-version).
 
 @@project-info{ projectId="pekko-projection-cassandra" }
 
@@ -215,13 +215,13 @@ Java
 
 ### Actor handler
 
-A good alternative for advanced state management is to implement the handler as an [actor](https://doc.akka.io/docs/akka/current/typed/actors.html),
+A good alternative for advanced state management is to implement the handler as an [actor](https://pekko.apache.org/docs/pekko/current/typed/actors.html),
 which is described in @ref:[Processing with Actor](actor.md).
 
 ### Flow handler
 
-An Akka Streams `FlowWithContext` can be used instead of a handler for processing the envelopes,
-which is described in @ref:[Processing with Akka Streams](flow.md).
+An Apache Pekko Streams `FlowWithContext` can be used instead of a handler for processing the envelopes,
+which is described in @ref:[Processing with Apache Pekko Streams](flow.md).
+which is described in @ref:[Processing with Apache Pekko Streams](flow.md).
 
 ### Handler lifecycle
 
@@ -266,13 +266,11 @@ CREATE TABLE IF NOT EXISTS akka_projection.projection_management (
 
 The supported offset types of the `CassandraProjection` are:
 
-* `akka.persistence.query.Offset` types from @ref:[events from Akka Persistence](eventsourced.md)
+* `akka.persistence.query.Offset` types from @ref:[events from Apache Pekko Persistence](eventsourced.md)
 * `String`
 * @scala[`Int`]@java[Integer]
 * `Long`
 * Any other type that has a configured Akka Serializer is stored with base64 encoding of the serialized bytes.
-  For example the [Akka Persistence Spanner](https://doc.akka.io/docs/akka-persistence-spanner/current/) offset
-  is supported in this way. 
 
 @@@ note
 
@@ -301,7 +299,7 @@ One important setting is to configure the database driver to retry the initial c
 
 It is not enabled automatically as it is in the driver's reference.conf and is not overridable in a profile.
 
-It is possible to share the same Cassandra session as [Akka Persistence Cassandra](https://doc.akka.io/docs/akka-persistence-cassandra/current/)
+It is possible to share the same Cassandra session as [Apache Pekko Persistence Cassandra](https://doc.akka.io/docs/akka-persistence-cassandra/current/)
 by setting the `session-config-path`:
 
 ```
diff --git a/docs/src/main/paradox/classic.md b/docs/src/main/paradox/classic.md
index 9718e0b..e539d87 100644
--- a/docs/src/main/paradox/classic.md
+++ b/docs/src/main/paradox/classic.md
@@ -1,7 +1,7 @@
 # Akka Classic
 
-Akka Projections can be used with the [new Actor API](https://doc.akka.io/docs/akka/current/typed/actors.html) or
-the [classic Actor API](https://doc.akka.io/docs/akka/current/index-classic.html). The documentation samples
+Apache Pekko Projections can be used with the [new Actor API](https://pekko.apache.org/docs/pekko/current/typed/actors.html) or
+the [classic Actor API](https://pekko.apache.org/docs/pekko/current/index-classic.html). The documentation samples
 show the new Actor API, and this page highlights how to use it with the classic Actor API.
 
 ## Actor System
@@ -20,7 +20,7 @@ Java
 @ref:[Events from Akka Classic Persistence](eventsourced.md) can be emitted from `PersistentActor` and consumed by a
 Projection with the @apidoc[EventSourcedProvider$]. The events from the `PersistentActor` must be tagged by wrapping
 them in `akka.persistence.journal.Tagged`, which can be done in the `PersistentActor` or by using
-[Event Adapters](https://doc.akka.io/docs/akka/current/persistence.html#event-adapters).
+[Event Adapters](https://pekko.apache.org/docs/pekko/current/persistence.html#event-adapters).
 
 ## Running
 
diff --git a/docs/src/main/paradox/durable-state.md b/docs/src/main/paradox/durable-state.md
index 40d0e9c..ceffe5b 100644
--- a/docs/src/main/paradox/durable-state.md
+++ b/docs/src/main/paradox/durable-state.md
@@ -1,13 +1,13 @@
 # Changes from Durable State
 
-A typical source for Projections is the change stored with @apidoc[DurableStateBehavior$] in [Akka Persistence](https://doc.akka.io/docs/akka/current/typed/durable-state/persistence.html). Durable state changes can be [tagged](https://doc.akka.io/docs/akka/current/typed/durable-state/persistence.html#tagging) and then
-consumed with the [changes query](https://doc.akka.io/docs/akka/current/durable-state/persistence-query.html#using-query-with-akka-projections).
+A typical source for Projections is the change stored with @apidoc[DurableStateBehavior$] in [Apache Pekko Persistence](https://pekko.apache.org/docs/pekko/current/typed/durable-state/persistence.html). Durable state changes can be [tagged](https://pekko.apache.org/docs/pekko/current/typed/durable-state/persistence.html#tagging) and then
+consumed with the [changes query](https://pekko.apache.org/docs/pekko/current/durable-state/persistence-query.html#using-query-with-akka-projections).
 
-Akka Projections has integration with `changes`, which is described here. 
+Apache Pekko Projections has integration with `changes`, which is described here. 
 
 ## Dependencies
 
-To use the Durable State module of Akka Projections, add the following dependency in your project:
+To use the Durable State module of Apache Pekko Projections, add the following dependency in your project:
 
 @@dependency [sbt,Maven,Gradle] {
   group=org.apache.pekko
@@ -15,7 +15,7 @@ To use the Durable State module of Akka Projections, add the following dependenc
   version=$project.version$
 }
 
-Akka Projections requires Akka $akka.version$ or later, see @ref:[Akka version](overview.md#akka-version).
+Apache Pekko Projections requires Akka $akka.version$ or later, see @ref:[Akka version](overview.md#akka-version).
 
 @@project-info{ projectId="pekko-projection-durable-state" }
 
@@ -36,7 +36,7 @@ Scala
 Java
 :  @@snip [DurableStateStoreDocExample.java](/examples/src/test/java/jdocs/state/DurableStateStoreDocExample.java) { #changesByTagSourceProvider }
 
-This example is using the [DurableStateStore JDBC plugin for Akka Persistence](https://doc.akka.io/docs/akka-persistence-jdbc/current/durable-state-store.html).
+This example is using the [DurableStateStore JDBC plugin for Apache Pekko Persistence](https://doc.akka.io/docs/akka-persistence-jdbc/current/durable-state-store.html).
 You will use the same plugin that you configured for the write side. The one that is used by the `DurableStateBehavior`.
 
 This source is consuming all the changes from the `Account` `DurableStateBehavior` that are tagged with `"bank-accounts-1"`. In a production application, you would need to start as many instances as the number of different tags you used. That way you consume the changes from all entities.
@@ -56,7 +56,7 @@ Scala
 Java
 :  @@snip [DurableStateStoreDocExample.java](/examples/src/test/java/jdocs/state/DurableStateStoreBySlicesDocExample.java) { #changesBySlicesSourceProvider }
 
-This example is using the [R2DBC plugin for Akka Persistence](https://doc.akka.io/docs/akka-persistence-r2dbc/current/query.html).
+This example is using the [R2DBC plugin for Apache Pekko Persistence](https://doc.akka.io/docs/akka-persistence-r2dbc/current/query.html).
 You will use the same plugin that you configured for the write side. The one that is used by the `DurableStateBehavior`.
 
 This source is consuming all the changes from the `Account` `DurableStateBehavior` for the given slice range. In a production application, you would need to start as many instances as the number of slice ranges. That way you consume the changes from all entities.
diff --git a/docs/src/main/paradox/eventsourced.md b/docs/src/main/paradox/eventsourced.md
index dd1b60a..0a7cc25 100644
--- a/docs/src/main/paradox/eventsourced.md
+++ b/docs/src/main/paradox/eventsourced.md
@@ -1,13 +1,13 @@
-# Events from Akka Persistence
+# Events from Apache Pekko Persistence
 
-A typical source for Projections is events stored with @apidoc[EventSourcedBehavior$] in [Akka Persistence](https://doc.akka.io/docs/akka/current/typed/persistence.html). Events can be [tagged](https://doc.akka.io/docs/akka/current/typed/persistence.html#tagging) and then
-consumed with the [eventsByTag query](https://doc.akka.io/docs/akka/current/persistence-query.html#eventsbytag-and-currenteventsbytag).
+A typical source for Projections is events stored with @apidoc[EventSourcedBehavior$] in [Apache Pekko Persistence](https://pekko.apache.org/docs/pekko/current/typed/persistence.html). Events can be [tagged](https://pekko.apache.org/docs/pekko/current/typed/persistence.html#tagging) and then
+consumed with the [eventsByTag query](https://pekko.apache.org/docs/pekko/current/persistence-query.html#eventsbytag-and-currenteventsbytag).
 
-Akka Projections has integration with `eventsByTag`, which is described here. 
+Apache Pekko Projections has integration with `eventsByTag`, which is described here. 
 
 ## Dependencies
 
-To use the Event Sourced module of Akka Projections add the following dependency in your project:
+To use the Event Sourced module of Apache Pekko Projections add the following dependency in your project:
 
 @@dependency [sbt,Maven,Gradle] {
   group=org.apache.pekko
@@ -15,7 +15,7 @@ To use the Event Sourced module of Akka Projections add the following dependency
   version=$project.version$
 }
 
-Akka Projections require Akka $akka.version$ or later, see @ref:[Akka version](overview.md#akka-version).
+Apache Pekko Projections require Akka $akka.version$ or later, see @ref:[Akka version](overview.md#akka-version).
 
 @@project-info{ projectId="pekko-projection-eventsourced" }
 
@@ -36,8 +36,8 @@ Scala
 Java
 :  @@snip [EventSourcedDocExample.java](/examples/src/test/java/jdocs/eventsourced/EventSourcedDocExample.java) { #eventsByTagSourceProvider }
 
-This example is using the [Cassandra plugin for Akka Persistence](https://doc.akka.io/docs/akka-persistence-cassandra/current/read-journal.html),
-but same code can be used for other Akka Persistence plugins by replacing the `CassandraReadJournal.Identifier`.
+This example is using the [Cassandra plugin for Apache Pekko Persistence](https://doc.akka.io/docs/akka-persistence-cassandra/current/read-journal.html),
+but same code can be used for other Apache Pekko Persistence plugins by replacing the `CassandraReadJournal.Identifier`.
 For example the [JDBC plugin](https://doc.akka.io/docs/akka-persistence-jdbc/current/) can be used. You will
 use the same plugin as you have configured for the write side that is used by the `EventSourcedBehavior`.
 
@@ -61,7 +61,7 @@ Scala
 Java
 :  @@snip [EventSourcedDocExample.java](/examples/src/test/java/jdocs/eventsourced/EventSourcedBySlicesDocExample.java) { #eventsBySlicesSourceProvider }
 
-This example is using the [R2DBC plugin for Akka Persistence](https://doc.akka.io/docs/akka-persistence-r2dbc/current/query.html).
+This example is using the [R2DBC plugin for Apache Pekko Persistence](https://doc.akka.io/docs/akka-persistence-r2dbc/current/query.html).
 You will use the same plugin as you have configured for the write side that is used by the `EventSourcedBehavior`.
 
 This source is consuming all events from the `ShoppingCart` `EventSourcedBehavior` for the given slice range. In a production application, you would need to start as many instances as the number of slice ranges. That way you consume the events from all entities.
diff --git a/docs/src/main/paradox/flow.md b/docs/src/main/paradox/flow.md
index 675d827..dede30d 100644
--- a/docs/src/main/paradox/flow.md
+++ b/docs/src/main/paradox/flow.md
@@ -1,6 +1,6 @@
-# Processing with Akka Streams
+# Processing with Apache Pekko Streams
 
-An Akka Streams `FlowWithContext` can be used instead of a handler for processing the envelopes with at-least-once
+An Apache Pekko Streams `FlowWithContext` can be used instead of a handler for processing the envelopes with at-least-once
 semantics.
 
 The following example is using the `CassandraProjection` but the flow would be the same if used
diff --git a/docs/src/main/paradox/getting-started/event-generator-app.md b/docs/src/main/paradox/getting-started/event-generator-app.md
index f32f7fd..308a7ba 100644
--- a/docs/src/main/paradox/getting-started/event-generator-app.md
+++ b/docs/src/main/paradox/getting-started/event-generator-app.md
@@ -3,7 +3,7 @@
 This is a simulation of fake Event Sourced shopping carts. The details of this implementation is not
 important for understanding Projections. It's needed for @ref:[running the example](running.md).
 
-Please look at the [Akka reference documentation for Event Sourcing](https://doc.akka.io/docs/akka/current/typed/persistence.html)
+Please look at the [Apache Pekko reference documentation for Event Sourcing](https://pekko.apache.org/docs/pekko/current/typed/persistence.html)
 for how to implement real `EventSourcedBehavior`.  
 
 Add the `EventGeneratorApp` to your project:
diff --git a/docs/src/main/paradox/getting-started/index.md b/docs/src/main/paradox/getting-started/index.md
index 73f5234..065bb21 100644
--- a/docs/src/main/paradox/getting-started/index.md
+++ b/docs/src/main/paradox/getting-started/index.md
@@ -14,12 +14,12 @@ The example used in this guide is based on a more complete application that is p
 * [Build a Stateful Projection handler](projection-handler.md)
 * [Writing tests for a Projection](testing.md)
 * [Running the Projection](running.md)
-* [Running the Projection in Akka Cluster](running-cluster.md)
+* [Running the Projection in Apache Pekko Cluster](running-cluster.md)
 
 @@@
 
 ## Video Introduction
 
-This video on YouTube gives a short introduction to Akka Projections for processing a stream of events or records from a source to a projected model or external system.
+This video on YouTube gives a short introduction to Apache Pekko Projections for processing a stream of events or records from a source to a projected model or external system.
 
-[![Akka Projections introduction](../assets/intro-video.png)](http://www.youtube.com/watch?v=0toyKxomdwo "Watch video on YouTube")
+[![Apache Pekko Projections introduction](../assets/intro-video.png)](http://www.youtube.com/watch?v=0toyKxomdwo "Watch video on YouTube")
diff --git a/docs/src/main/paradox/getting-started/running-cluster.md b/docs/src/main/paradox/getting-started/running-cluster.md
index 7a91dac..72792cb 100644
--- a/docs/src/main/paradox/getting-started/running-cluster.md
+++ b/docs/src/main/paradox/getting-started/running-cluster.md
@@ -1,9 +1,9 @@
-# Running the Projection in Akka Cluster
+# Running the Projection in Apache Pekko Cluster
 
-Running the Projection with [Akka Cluster](https://doc.akka.io/docs/akka/current/typed/cluster.html) allows us to add two important aspects to our system: availability and scalability.
+Running the Projection with [Apache Pekko Cluster](https://pekko.apache.org/docs/pekko/current/typed/cluster.html) allows us to add two important aspects to our system: availability and scalability.
 A Projection running as a single Actor creates a single point of failure (availability), when the app shuts down for any reason, the projection is no longer running until it's started again.
 A Projection running as a single Actor creates a processing bottleneck (scalability), all messages from the @apidoc[SourceProvider] are processed by a single Actor on a single machine.
-By using a [Sharded Daemon Process](https://doc.akka.io/docs/akka/current/typed/cluster-sharded-daemon-process.html#sharded-daemon-process) with Akka Cluster and [Akka Cluster Sharding](https://doc.akka.io/docs/akka/current/typed/cluster-sharding.html) we can scale up the Projection and make it more available by running at least as many instances of the same Projection as we have cluster members.
+By using a [Sharded Daemon Process](https://pekko.apache.org/docs/pekko/current/typed/cluster-sharded-daemon-process.html#sharded-daemon-process) with Apache Pekko Cluster and [Apache Pekko Cluster Sharding](https://pekko.apache.org/docs/pekko/current/typed/cluster-sharding.html) we can scale up the Projection and make it more available by running at least as many instances of the same Projection as we have cluster members.
 As Akka cluster members join and leave the cluster the Sharded Daemon Process will automatically scale and rebalance Sharded Daemon Processes (Projection instances) accordingly.
 
 Running the Projection as a Sharded Daemon Process requires no changes to our projection handler and repository, we only need to change the way in which the actor that runs the Projection is initialized.
@@ -28,7 +28,7 @@ Before running the app we must first run the `EventGeneratorApp` in `cluster` mo
 Shopping cart events are tagged in a similar way to the sharded entities themselves.
 Given a sequence of tags from `0..n` a hash is generated using the sharding entity key, the shopping cart id.
 The hash is modded `%` by the number of tags in the sequence to choose a tag from the sequence.
-See the @ref:[Tagging Events in EventSourcedBehavior](../running.md#tagging-events-in-eventsourcedbehavior) section of the documentation for an example of how events can be tagged with Akka Persistence.
+See the @ref:[Tagging Events in EventSourcedBehavior](../running.md#tagging-events-in-eventsourcedbehavior) section of the documentation for an example of how events can be tagged with Apache Pekko Persistence.
 
 The same `EventGeneratorApp` from the previous @ref:[Running the Projection](running.md) section can be used to generate events for this app with an additional argument `cluster`.
 Run the app:
diff --git a/docs/src/main/paradox/getting-started/running.md b/docs/src/main/paradox/getting-started/running.md
index 3ae44c1..9a40941 100644
--- a/docs/src/main/paradox/getting-started/running.md
+++ b/docs/src/main/paradox/getting-started/running.md
@@ -41,13 +41,13 @@ PRIMARY KEY (item_id));
 ```
 
 Source events are generated with the `EventGeneratorApp`.
-This app is configured to use [Akka Persistence Cassandra](https://doc.akka.io/docs/akka-persistence-cassandra/current/index.html) and [Akka Cluster](https://doc.akka.io/docs/akka/current/typed/cluster.html) [Sharding](https://doc.akka.io/docs/akka/current/typed/cluster-sharding.html) to persist random `ShoppingCartApp.Events` to a journal.
+This app is configured to use [Apache Pekko Persistence Cassandra](https://doc.akka.io/docs/akka-persistence-cassandra/current/index.html) and [Apache Pekko Cluster](https://pekko.apache.org/docs/pekko/current/typed/cluster.html) [Sharding](https://pekko.apache.org/docs/pekko/current/typed/cluster-sharding.html) to persist random `ShoppingCartApp.Events` to a journal.
 It will checkout a shopping cart with random items and quantities every 1 second.
-The app will automatically create all the Akka Persistence infrastructure tables in the `akka` keyspace.
-We won't go into any further detail about how this app functions because it falls outside the scope of Akka Projections.
-To learn more about the writing events with [Akka Persistence see the Akka documentation](https://doc.akka.io/docs/akka/current/typed/index-persistence.html).
+The app will automatically create all the Apache Pekko Persistence infrastructure tables in the `akka` keyspace.
+We won't go into any further detail about how this app functions because it falls outside the scope of Apache Pekko Projections.
+To learn more about the writing events with [Apache Pekko Persistence see the Apache Pekko documentation](https://pekko.apache.org/docs/pekko/current/typed/index-persistence.html).
 
-Add the Akka Cluster Sharding library to your project:
+Add the Apache Pekko Cluster Sharding library to your project:
 
 @@dependency [sbt,Maven,Gradle] {
 group=org.apache.pekko
diff --git a/docs/src/main/paradox/getting-started/setup-your-app.md b/docs/src/main/paradox/getting-started/setup-your-app.md
index acfed40..00b4132 100644
--- a/docs/src/main/paradox/getting-started/setup-your-app.md
+++ b/docs/src/main/paradox/getting-started/setup-your-app.md
@@ -1,6 +1,6 @@
 # Setup your application
 
-Add the Akka Projections core library to a new project.
+Add the Apache Pekko Projections core library to a new project.
 This isn't strictly required, because as we add other dependencies in the following steps it will transitively include core as a dependency, but it never hurts to be explicit.
 
 @@dependency [sbt,Maven,Gradle] {
@@ -18,8 +18,8 @@ Scala
 Java
 :  @@snip [ShoppingCartEvents.java](/examples/src/test/java/jdocs/guide/ShoppingCartEvents.java) { #guideEvents }
 
-To enable serialization and deserialization of events with Akka Persistence it's necessary to define a base type for your event type hierarchy.
-In this guide we are using [Jackson Serialization](https://doc.akka.io/docs/akka/current/serialization-jackson.html).
+To enable serialization and deserialization of events with Apache Pekko Persistence it's necessary to define a base type for your event type hierarchy.
+In this guide we are using [Jackson Serialization](https://pekko.apache.org/docs/pekko/current/serialization-jackson.html).
 Add the `CborSerializable` base type to your project:
 
 Scala
@@ -29,7 +29,7 @@ Java
 :  @@snip [CborSerializable.java](/examples/src/test/java/jdocs/guide/CborSerializable.java) { #guideCbor }
 
 Configure the `CborSerializable` type to use `jackson-cbor` configuration in your `application.conf`.
-We will add this configuration when Akka Persistence configuration is setup in the @ref:[Choosing a SourceProvider](source-provider.md) section of the guide.
+We will add this configuration when Apache Pekko Persistence configuration is setup in the @ref:[Choosing a SourceProvider](source-provider.md) section of the guide.
 
 Scala
 :  @@snip [guide-shopping-cart-app.conf](/examples/src/test/resources/guide-shopping-cart-app.conf) { #guideSerializationBindingsScala }
@@ -45,7 +45,7 @@ In @scala[sbt you can add it your sbt project by adding it to the `javacOptions`
 @@@
 
 Define the persistence tags to be used in your project.
-Note that partitioned tags will be used later when @ref[running the projection in Akka Cluster](running-cluster.md).
+Note that partitioned tags will be used later when @ref[running the projection in Apache Pekko Cluster](running-cluster.md).
 Add `ShoppingCartTags` to your project:
 
 Scala
diff --git a/docs/src/main/paradox/getting-started/source-provider.md b/docs/src/main/paradox/getting-started/source-provider.md
index a7afd02..35d9bc8 100644
--- a/docs/src/main/paradox/getting-started/source-provider.md
+++ b/docs/src/main/paradox/getting-started/source-provider.md
@@ -3,7 +3,7 @@
 A @apidoc[SourceProvider] will provide the data to our projection. 
 In Projections each element that's processed is an `Envelope` and each `Envelope` contains an `Event`.
 An `Envelope` must include an `Offset`, but it can also contain other information such as creation timestamp, a topic name, an entity tag, etc.
-There are several supported Source Provider's available (or you can build your own), but in this example we will use the @ref:[Akka Persistence `EventSourced` Source Provider](../eventsourced.md).
+There are several supported Source Provider's available (or you can build your own), but in this example we will use the @ref:[Apache Pekko Persistence `EventSourced` Source Provider](../eventsourced.md).
 
 Add the following dependencies to your project:
 
@@ -22,8 +22,8 @@ Java
 :  @@snip [ShoppingCartApp.java](/examples/src/test/java/jdocs/guide/ShoppingCartApp.java) { #guideSourceProviderImports }
 
 Create the @apidoc[SourceProvider].
-The @ref:[Event Sourced Source Provider](../eventsourced.md) is using [Akka Persistence](https://doc.akka.io/docs/akka/current/typed/persistence.html) internally (specifically the [eventsByTag](https://doc.akka.io/docs/akka/current/persistence-query.html#eventsbytag-and-currenteventsbytag) API).
-To initialize the Source Provider we need to set parameters to choose the Akka Persistence plugin (Cassandra) to use as well as the name of the tag used for events we're interested in from the journal.
+The @ref:[Event Sourced Source Provider](../eventsourced.md) is using [Apache Pekko Persistence](https://pekko.apache.org/docs/pekko/current/typed/persistence.html) internally (specifically the [eventsByTag](https://pekko.apache.org/docs/pekko/current/persistence-query.html#eventsbytag-and-currenteventsbytag) API).
+To initialize the Source Provider we need to set parameters to choose the Apache Pekko Persistence plugin (Cassandra) to use as well as the name of the tag used for events we're interested in from the journal.
 
 Setup the `SourceProvider` in the Guardian `Behavior` defined in `ShoppingCartApp`:
 
@@ -33,6 +33,6 @@ Scala
 Java
 :  @@snip [ShoppingCartApp.java](/examples/src/test/java/jdocs/guide/ShoppingCartApp.java) { #guideSourceProviderSetup }
 
-Finally, we must configure Akka Persistence by adding a configuration file `guide-shopping-cart-app.conf` to the `src/main/resources/` directory of the project:
+Finally, we must configure Apache Pekko Persistence by adding a configuration file `guide-shopping-cart-app.conf` to the `src/main/resources/` directory of the project:
 
 @@snip [guide-shopping-cart-app.conf](/examples/src/test/resources/guide-shopping-cart-app.conf) { #guideConfig }
diff --git a/docs/src/main/paradox/getting-started/testing.md b/docs/src/main/paradox/getting-started/testing.md
index df2d12b..31f00d9 100644
--- a/docs/src/main/paradox/getting-started/testing.md
+++ b/docs/src/main/paradox/getting-started/testing.md
@@ -10,8 +10,8 @@ version=$project.version$
 }
 
 Import the @apidoc[akka.projection.testkit.(javadsl|scaladsl).ProjectionTestKit] and other utilities into a new 
-@scala[[ScalaTest](https://doc.akka.io/docs/akka/current/typed/testing-async.html#test-framework-integration) test spec]
-@java[[JUnit](https://doc.akka.io/docs/akka/current/typed/testing-async.html#test-framework-integration) test].
+@scala[[ScalaTest](https://pekko.apache.org/docs/pekko/current/typed/testing-async.html#test-framework-integration) test spec]
+@java[[JUnit](https://pekko.apache.org/docs/pekko/current/typed/testing-async.html#test-framework-integration) test].
 
 Scala
 :  @@snip [ShoppingCartAppSpec.scala](/examples/src/test/scala/docs/guide/ShoppingCartAppSpec.scala) { #testKitImports }
diff --git a/docs/src/main/paradox/jdbc.md b/docs/src/main/paradox/jdbc.md
index 2e99cc3..f8b3297 100644
--- a/docs/src/main/paradox/jdbc.md
+++ b/docs/src/main/paradox/jdbc.md
@@ -2,7 +2,7 @@
 
 The @apidoc[JdbcProjection$] has support for storing the offset in a relational database using JDBC.
 
-The source of the envelopes can be @ref:[events from Akka Persistence](eventsourced.md) or any other `SourceProvider`
+The source of the envelopes can be @ref:[events from Apache Pekko Persistence](eventsourced.md) or any other `SourceProvider`
 with supported @ref:[offset types](#offset-types).
 
 A @apidoc[JdbcHandler] receives a @apidoc[JdbcSession] instance and an envelope. The `JdbcSession` provides the means to access an open JDBC connection that can be used to process the envelope. The target database operations can be run in the same transaction as the storage of the offset, which means that @ref:[exactly-once](#exactly-once)
@@ -10,7 +10,7 @@ processing semantics is supported. It also offers @ref:[at-least-once](#at-least
 
 ## Dependencies
 
-To use the JDBC module of Akka Projections add the following dependency in 
your project:
+To use the JDBC module of Apache Pekko Projections add the following 
dependency in your project:
 
 @@dependency [sbt,Maven,Gradle] {
   group=org.apache.pekko
@@ -18,7 +18,7 @@ To use the JDBC module of Akka Projections add the following 
dependency in your
   version=$project.version$
 }
 
-Akka Projections require Akka $akka.version$ or later, see @ref:[Akka 
version](overview.md#akka-version).
+Apache Pekko Projections requires Akka $akka.version$ or later, see @ref:[Akka version](overview.md#akka-version).
 
 @@project-info{ projectId="pekko-projection-jdbc" }
 
@@ -38,7 +38,7 @@ There are two settings that need to be set beforehand in your 
`application.conf`
 
 ## Defining a JdbcSession
 
-Before using Akka Projections JDBC you must implement a `JdbcSession` 
@scala[trait]@java[interface]. `JdbcSession` is used to open a connection and 
start a transaction. A new `JdbcSession` will be created for each call to the 
handler. At the end of the processing, the transaction will be committed (or 
rolled back). 
+Before using Apache Pekko Projections JDBC you must implement a `JdbcSession` 
@scala[trait]@java[interface]. `JdbcSession` is used to open a connection and 
start a transaction. A new `JdbcSession` will be created for each call to the 
handler. At the end of the processing, the transaction will be committed (or 
rolled back). 
 
 When using `JdbcProjection.exactlyOnce`, the `JdbcSession` that is passed to the handler will be used to save the offset behind the scenes. Therefore, it's extremely important to disable auto-commit (e.g. `setAutoCommit(false)`), otherwise the two operations won't participate in the same transaction.
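The exactly-once guarantee here is purely transactional: the handler's own writes and the offset write either both commit or both roll back. A toy model of that invariant in plain Scala (not the real `JdbcProjection` API; names are made up for illustration):

```scala
// Toy model, not the real JdbcProjection API: shows why the handler's
// writes and the offset write must share one transaction.
final case class TxState(rows: Vector[String], offset: Long)

// Commit both writes atomically, or keep neither when the handler fails
// (the "rollback" branch).
def processExactlyOnce(
    state: TxState,
    envelope: String,
    envelopeOffset: Long)(handler: String => Either[String, String]): TxState =
  handler(envelope) match {
    case Right(row) => TxState(state.rows :+ row, envelopeOffset)
    case Left(_)    => state
  }

val s0 = TxState(Vector.empty, offset = 0L)
val s1 = processExactlyOnce(s0, "item-added", 1L)(e => Right(e.toUpperCase))
val s2 = processExactlyOnce(s1, "bad-envelope", 2L)(_ => Left("handler failed"))
```

In the real module the same effect comes from saving the offset on the handler's `JdbcSession` connection, which is why auto-commit must be disabled.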
 
@@ -68,7 +68,7 @@ Java
 
 ## Blocking JDBC Dispatcher
 
-JDBC APIs are blocking by design, therefore Akka Projections JDBC will use a 
dedicated dispatcher to run all JDBC calls. It's important to configure the 
dispatcher to have the same size as the connection pool. 
+JDBC APIs are blocking by design, therefore Apache Pekko Projections JDBC will 
use a dedicated dispatcher to run all JDBC calls. It's important to configure 
the dispatcher to have the same size as the connection pool. 
 
 Each time the projection handler is called one thread and one database 
connection will be used. If your connection pool is smaller than the number of 
threads, the thread can potentially block while waiting for the connection pool 
to provide a connection. 
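A sketch of what such a dedicated dispatcher could look like in `application.conf` (the dispatcher name and pool size below are illustrative; consult the module's reference configuration for the actual setting it uses):

```hocon
# Illustrative only: size the dispatcher to match the JDBC connection pool,
# so every thread running a handler can also obtain a connection.
my-blocking-jdbc-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 10  # keep equal to the connection pool's max size
  }
  throughput = 1
}
```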
 
@@ -192,13 +192,13 @@ Same type of handlers can be used with `JdbcProjection` 
instead of `CassandraPro
 
 ### Actor handler
 
-A good alternative for advanced state management is to implement the handler 
as an [actor](https://doc.akka.io/docs/akka/current/typed/actors.html),
+A good alternative for advanced state management is to implement the handler 
as an [actor](https://pekko.apache.org/docs/pekko/current/typed/actors.html),
 which is described in @ref:[Processing with Actor](actor.md).
 
 ### Flow handler
 
-An Akka Streams `FlowWithContext` can be used instead of a handler for 
processing the envelopes,
-which is described in @ref:[Processing with Akka Streams](flow.md).
+An Apache Pekko Streams `FlowWithContext` can be used instead of a handler for 
processing the envelopes,
+which is described in @ref:[Processing with Apache Pekko Streams](flow.md).
 
 ### Handler lifecycle
 
@@ -247,14 +247,12 @@ akka.projection.jdbc.offset-store {
 
 The supported offset types of the `JdbcProjection` are:
 
-* @apidoc[akka.persistence.query.Offset] types from @ref:[events from Akka 
Persistence](eventsourced.md)
+* @apidoc[akka.persistence.query.Offset] types from @ref:[events from Apache 
Pekko Persistence](eventsourced.md)
 * @apidoc[MergeableOffset] that is used for @ref:[messages from 
Kafka](kafka.md#mergeable-offset)
 * `String`
 * `Int`
 * `Long`
 * Any other type that has a configured Akka Serializer is stored with base64 
encoding of the serialized bytes.
-  For example the [Akka Persistence 
Spanner](https://doc.akka.io/docs/akka-persistence-spanner/current/) offset
-  is supported in this way.
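The base64 fallback for arbitrary offset types can be pictured as a simple round trip. This is a sketch of the encoding step only, not the actual offset-store code:

```scala
import java.nio.charset.StandardCharsets
import java.util.Base64

// Sketch: a custom offset type is first turned into bytes by its configured
// serializer; the offset store then keeps those bytes as a base64 string.
def encodeOffset(serialized: Array[Byte]): String =
  Base64.getEncoder.encodeToString(serialized)

def decodeOffset(stored: String): Array[Byte] =
  Base64.getDecoder.decode(stored)

val bytes  = "my-custom-offset".getBytes(StandardCharsets.UTF_8)
val stored = encodeOffset(bytes)
val back   = new String(decodeOffset(stored), StandardCharsets.UTF_8)
```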
 
 ## Configuration
 
diff --git a/docs/src/main/paradox/kafka.md b/docs/src/main/paradox/kafka.md
index 2ecff5a..2495899 100644
--- a/docs/src/main/paradox/kafka.md
+++ b/docs/src/main/paradox/kafka.md
@@ -1,10 +1,10 @@
 # Messages from and to Kafka
 
-A typical source for Projections is messages from Kafka. Akka Projections 
supports integration with Kafka using [Alpakka 
Kafka](https://doc.akka.io/docs/alpakka-kafka/current/).
+A typical source for Projections is messages from Kafka. Apache Pekko 
Projections supports integration with Kafka using [Alpakka 
Kafka](https://doc.akka.io/docs/alpakka-kafka/current/).
 
 The @apidoc[KafkaSourceProvider$] uses consumer group assignments from Kafka 
and can resume from offsets stored in a database.
 
-Akka Projections can store the offsets from Kafka in a @ref:[relational DB 
with JDBC](jdbc.md)
+Apache Pekko Projections can store the offsets from Kafka in a 
@ref:[relational DB with JDBC](jdbc.md)
 or in @ref:[relational DB with Slick](slick.md).
 
 The `JdbcProjection` @scala[or `SlickProjection`] envelope handler will be run 
by the projection. This means that the target database operations can be run in 
the same transaction as the storage of the offset, which means when used with 
@ref:[exactly-once](jdbc.md#exactly-once) the offsets will be persisted on the 
same transaction as the projected model (see @ref:[Committing offset outside 
Kafka](#committing-offset-outside-kafka)). It also offers 
@ref:[at-least-once](jdbc.md#at-least-onc [...]
@@ -19,7 +19,7 @@ A `Projection` can also @ref:[send messages to 
Kafka](#sending-to-kafka).
 
 ## Dependencies
 
-To use the Kafka module of Akka Projections add the following dependency in 
your project:
+To use the Kafka module of Apache Pekko Projections add the following 
dependency in your project:
 
 @@dependency [sbt,Maven,Gradle] {
   group=org.apache.pekko
@@ -27,7 +27,7 @@ To use the Kafka module of Akka Projections add the following 
dependency in your
   version=$project.version$
 }
 
-Akka Projections require Akka $akka.version$ or later, see @ref:[Akka 
version](overview.md#akka-version).
+Apache Pekko Projections requires Akka $akka.version$ or later, see @ref:[Akka version](overview.md#akka-version).
 
 @@project-info{ projectId="pekko-projection-kafka" }
 
@@ -85,7 +85,7 @@ To mitigate that risk, you can increase the value of 
`akka.projection.kafka.read
 
 ## Committing offset in Kafka
 
-When using the approach of committing the offsets back to Kafka the [Alpakka 
Kafka 
comittableSource](https://doc.akka.io/docs/alpakka-kafka/current/consumer.html) 
can be used, and Akka Projections is not needed for that usage.
+When using the approach of committing the offsets back to Kafka, the [Alpakka Kafka committableSource](https://doc.akka.io/docs/alpakka-kafka/current/consumer.html) can be used, and Apache Pekko Projections is not needed for that usage.
 
 ## Sending to Kafka
 
@@ -151,7 +151,7 @@ Java
 ## Mergeable Offset
 
 The offset type for a projection is determined by the @apidoc[SourceProvider] 
that's used.
-Akka Projections supports a variety of offset types.
+Apache Pekko Projections supports a variety of offset types.
 In most cases an event is associated with a single offset row in the 
projection implementation's offset store, but the @apidoc[KafkaSourceProvider$] 
uses a special type of offset called a @apidoc[MergeableOffset].
 
 @apidoc[MergeableOffset] allows us to read and write a map of offsets to the 
projection offset store.
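The merge semantics can be sketched with plain maps (a toy illustration, not the `MergeableOffset` API itself; the key format is made up):

```scala
// Toy sketch of the "mergeable" idea: a Kafka offset is a map of
// topic-partition -> offset; saving it merges into the stored map, so only
// partitions present in the incoming offset are updated.
def merge(stored: Map[String, Long], incoming: Map[String, Long]): Map[String, Long] =
  stored ++ incoming

val stored   = Map("orders-0" -> 41L, "orders-1" -> 7L)
val incoming = Map("orders-1" -> 8L)
val merged   = merge(stored, incoming)
```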
diff --git a/docs/src/main/paradox/overview.md 
b/docs/src/main/paradox/overview.md
index 90754af..f6ee1cc 100644
--- a/docs/src/main/paradox/overview.md
+++ b/docs/src/main/paradox/overview.md
@@ -1,15 +1,15 @@
 # Overview
 
-The purpose of Akka Projections is described in @ref:[Use Cases](use-cases.md).
+The purpose of Apache Pekko Projections is described in @ref:[Use 
Cases](use-cases.md).
 
-In Akka Projections you process a stream of events or records from a source to 
a projected model or external system.
+In Apache Pekko Projections you process a stream of events or records from a 
source to a projected model or external system.
 Each event is associated with an offset representing the position in the 
stream. This offset is used for
 resuming the stream from that position when the projection is restarted.
 
 As the source you can select from:
 
-* @ref:[Events from Akka Persistence](eventsourced.md)
-* @ref:[State changes from Akka Persistence](durable-state.md)
+* @ref:[Events from Apache Pekko Persistence](eventsourced.md)
+* @ref:[State changes from Apache Pekko Persistence](durable-state.md)
 * @ref:[Messages from Kafka](kafka.md)
 * Building your own @apidoc[SourceProvider]
 
@@ -20,27 +20,27 @@ For the offset storage you can select from:
 * @ref:[Offset in a relational DB with Slick](slick.md) (community-driven 
module)
 
 Those building blocks are assembled into a `Projection`. You can have many 
instances of it
-@ref:[automatically distributed and run](running.md) in an Akka Cluster.
+@ref:[automatically distributed and run](running.md) in an Apache Pekko 
Cluster.
 
 @@@ warning
 
-This module is currently marked as [May 
Change](https://doc.akka.io/docs/akka/current/common/may-change.html)
+This module is currently marked as [May 
Change](https://pekko.apache.org/docs/pekko/current/common/may-change.html)
 in the sense that the API might be changed based on feedback from initial 
usage.
 However, the module is ready for use in production and we will not break the serialization format of messages or stored data.
 
 @@@
 
-To see a complete example of an Akka Projections implementation review the 
@ref:[Getting Started Guide](getting-started/index.md)
+To see a complete example of an Apache Pekko Projections implementation review 
the @ref:[Getting Started Guide](getting-started/index.md)
 or the @extref[Microservices with Akka 
tutorial](platform-guide:microservices-tutorial/).
 
 ## Dependencies
 
-Akka Projections consists of several modules for specific technologies. The 
dependency section for
+Apache Pekko Projections consists of several modules for specific 
technologies. The dependency section for
 each module describes which dependency you should define in your project.
 
-* @ref:[Events from Akka Persistence](eventsourced.md)
-* @ref:[State changes from Akka Persistence](durable-state.md)
+* @ref:[Events from Apache Pekko Persistence](eventsourced.md)
+* @ref:[State changes from Apache Pekko Persistence](durable-state.md)
 * @ref:[Messages from Kafka](kafka.md)
 * @ref:[Offset in Cassandra](cassandra.md)
 * @ref:[Offset in a relational DB with JDBC](jdbc.md)
@@ -58,7 +58,7 @@ All of them share a dependency to `pekko-projection-core`:
 
 ### Akka version
 
-Akka Projections requires **Akka $akka.version$** or later. See [Akka's Binary 
Compatibility 
Rules](https://doc.akka.io/docs/akka/current/common/binary-compatibility-rules.html)
 for details.
+Apache Pekko Projections requires **Akka $akka.version$** or later. See 
[Akka's Binary Compatibility 
Rules](https://pekko.apache.org/docs/pekko/current/common/binary-compatibility-rules.html)
 for details.
 
 It is recommended to use the latest patch version of Akka. 
 It is important that all Akka dependencies are the same version, so it is recommended to depend on
@@ -67,7 +67,7 @@ them explicitly to avoid problems with transient dependencies 
causing an unlucky
 @@dependency[sbt,Gradle,Maven] {
   symbol=AkkaVersion
   value=$akka.version$
-  group=com.typesafe.akka
+  group=org.apache.pekko
   artifact=pekko-cluster-sharding-typed_$scala.binary.version$
   version=AkkaVersion
   group2=com.typesafe.akka
@@ -88,13 +88,13 @@ See the individual modules for their transitive 
dependencies.
 
 ### Akka Classic
 
-Akka Projections can be used with the [new Actor 
API](https://doc.akka.io/docs/akka/current/typed/actors.html) or
-the [classic Actor 
API](https://doc.akka.io/docs/akka/current/index-classic.html). The 
documentation samples
+Apache Pekko Projections can be used with the [new Actor 
API](https://pekko.apache.org/docs/pekko/current/typed/actors.html) or
+the [classic Actor 
API](https://pekko.apache.org/docs/pekko/current/index-classic.html). The 
documentation samples
 show the new Actor API, and the @ref:[Akka Classic page](classic.md) 
highlights how to use it with the classic
 Actor API.
 
 ## Contributing
 
-Please feel free to contribute to Akka and Akka Projections by reporting 
issues you identify, or by suggesting changes to the code. Please refer to our 
[contributing 
instructions](https://github.com/akka/akka/blob/master/CONTRIBUTING.md) to 
learn how it can be done.
+Please feel free to contribute to Akka and Apache Pekko Projections by 
reporting issues you identify, or by suggesting changes to the code. Please 
refer to our [contributing 
instructions](https://github.com/akka/akka/blob/master/CONTRIBUTING.md) to 
learn how it can be done.
 
 We want Akka to thrive in a welcoming and open atmosphere and expect all contributors to respect our [code of conduct](https://www.lightbend.com/conduct).
diff --git a/docs/src/main/paradox/projection-settings.md 
b/docs/src/main/paradox/projection-settings.md
index 2af674d..636ae0a 100644
--- a/docs/src/main/paradox/projection-settings.md
+++ b/docs/src/main/paradox/projection-settings.md
@@ -1,6 +1,6 @@
 # Projection Settings
 
-A Projection is a background process that continuously consume event envelopes 
from a `Source`. Therefore, in case of failures, it is automatically restarted. 
This is done by automatically wrapping the `Source` with a [RestartSource with 
backoff on 
failures](https://doc.akka.io/docs/akka/current/stream/operators/RestartSource/onFailuresWithBackoff.html#restartsource-onfailureswithbackoff).
+A Projection is a background process that continuously consumes event envelopes from a `Source`. Therefore, in case of failures, it is automatically restarted. This is done by automatically wrapping the `Source` with a [RestartSource with backoff on failures](https://pekko.apache.org/docs/pekko/current/stream/operators/RestartSource/onFailuresWithBackoff.html#restartsource-onfailureswithbackoff).
 
 By default, the backoff configuration defined in the reference configuration is used. Those values can be overridden in the `application.conf` file or programmatically as shown below.
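For example, an override in `application.conf` might look like this (the values are illustrative; the setting names follow the module's `reference.conf`):

```hocon
akka.projection.restart-backoff {
  min-backoff = 1s
  max-backoff = 10s
}
```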
 
diff --git a/docs/src/main/paradox/running.md b/docs/src/main/paradox/running.md
index 3a75251..37cf002 100644
--- a/docs/src/main/paradox/running.md
+++ b/docs/src/main/paradox/running.md
@@ -1,26 +1,26 @@
 # Running a Projection
 
-Once you have decided how you want to build your projection, the next step is 
to run it. Typically, you run it in a distributed fashion in order to spread 
the load over the different nodes in an Akka Cluster. However, it's also 
possible to run it as a single instance (when not clustered) or as single 
instance in a Cluster Singleton.
+Once you have decided how you want to build your projection, the next step is 
to run it. Typically, you run it in a distributed fashion in order to spread 
the load over the different nodes in an Apache Pekko Cluster. However, it's 
also possible to run it as a single instance (when not clustered) or as single 
instance in a Cluster Singleton.
 
 ## Dependencies
 
-To distribute the projection over the cluster we recommend the use of 
[ShardedDaemonProcess](https://doc.akka.io/docs/akka/current/typed/cluster-sharded-daemon-process.html).
 Add the following dependency in your project if not yet using Akka Cluster 
Sharding:
+To distribute the projection over the cluster we recommend the use of 
[ShardedDaemonProcess](https://pekko.apache.org/docs/pekko/current/typed/cluster-sharded-daemon-process.html).
 Add the following dependency in your project if not yet using Apache Pekko 
Cluster Sharding:
 
 @@dependency [sbt,Maven,Gradle] {
-  group=com.typesafe.akka
+  group=org.apache.pekko
   artifact=pekko-cluster-sharding-typed_$scala.binary.version$
   version=$akka.version$
 }
 
-Akka Projections require Akka $akka.version$ or later, see @ref:[Akka 
version](overview.md#akka-version).
+Apache Pekko Projections requires Akka $akka.version$ or later, see @ref:[Akka version](overview.md#akka-version).
 
-For more information on using Akka Cluster consult Akka's reference 
documentation on [Akka 
Cluster](https://doc.akka.io/docs/akka/current/typed/index-cluster.html) and 
[Akka Cluster 
Sharding](https://doc.akka.io/docs/akka/current/typed/cluster-sharding.html).
+For more information on using Apache Pekko Cluster consult the reference documentation on [Apache Pekko Cluster](https://pekko.apache.org/docs/pekko/current/typed/index-cluster.html) and [Apache Pekko Cluster Sharding](https://pekko.apache.org/docs/pekko/current/typed/cluster-sharding.html).
 
 ## Running with Sharded Daemon Process
 
 The Sharded Daemon Process can be used to distribute `n` instances of a given 
Projection across the cluster. Therefore, it's important that each Projection 
instance consumes a subset of the stream of envelopes.
 
-How the subset is created depends on the kind of source we consume. If it's an 
Alpakka Kafka source, this is done by Kafka consumer groups. When consuming 
from Akka Persistence Journal, the events must be sliced by tagging them as 
demonstrated in the example below.
+How the subset is created depends on the kind of source we consume. If it's an 
Alpakka Kafka source, this is done by Kafka consumer groups. When consuming 
from Apache Pekko Persistence Journal, the events must be sliced by tagging 
them as demonstrated in the example below.
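The slicing idea can be sketched as a hypothetical tagging function (tag prefix and tag count below are made up for illustration) that distributes persistence ids over a fixed set of tags, one Projection instance per tag:

```scala
// Illustrative tag-slicing scheme, not this repo's code: hash each
// persistence id into a fixed number of tags so that one Projection
// instance can consume each tag.
val numberOfTags = 5
def tagFor(persistenceId: String): String =
  "carts-" + math.abs(persistenceId.hashCode % numberOfTags)

val tags = List("cart-1", "cart-2", "cart-42").map(tagFor)
```

Because the tag is derived deterministically from the persistence id, every event of an entity always lands in the same slice.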
 
 ### Tagging Events in EventSourcedBehavior
 
@@ -44,16 +44,14 @@ planned maximum number of cluster nodes. It doesn't have to 
be exact.
 We will use those tags to query the journal and create as many Projections 
instances, and distribute them in the cluster.
 
 @@@ warning
-When using [Akka Persistence Cassandra 
plugin](https://doc.akka.io/docs/akka-persistence-cassandra/current/) you should
+When using [Apache Pekko Persistence Cassandra 
plugin](https://doc.akka.io/docs/akka-persistence-cassandra/current/) you should
 not use too many tags for each event. Each tag will result in a copy of the 
event in a separate table and
 that can impact write performance. Typically, you would use 1 tag per event as 
illustrated here. Additional
 filtering of events can be done in the Projection handler if it doesn't have 
to act on certain events.
-The [JDBC plugin](https://doc.akka.io/docs/akka-persistence-jdbc/current/) and
-[Spanner plugin](https://doc.akka.io/docs/akka-persistence-spanner/current/)
-don't have this constraint.
+The [JDBC plugin](https://doc.akka.io/docs/akka-persistence-jdbc/current/) 
doesn't have this constraint.
 @@@
 
-See also the [Akka reference documentation for 
tagging](https://doc.akka.io/docs/akka/current/typed/persistence.html#tagging).
+See also the [Apache Pekko reference documentation for 
tagging](https://pekko.apache.org/docs/pekko/current/typed/persistence.html#tagging).
 
 ### Event Sourced Provider per tag
 
@@ -100,7 +98,7 @@ For graceful stop it is recommended to use 
@scala[`ProjectionBehavior.Stop`]@jav
 ## Running with local Actor
 
 You can spawn the `ProjectionBehavior` as any other `Behavior`. This can be 
useful for testing or when running
-a local `ActorSystem` without Akka Cluster.
+a local `ActorSystem` without Apache Pekko Cluster.
 
 Scala
 :  @@snip 
[CassandraProjectionDocExample.scala](/examples/src/it/scala/docs/cassandra/CassandraProjectionDocExample.scala)
 { #running-with-actor }
@@ -114,7 +112,7 @@ overwrite each others offset storage with undefined and 
unpredictable results.
 ## Running in Cluster Singleton
 
 If you know that you only need one or a few projection instances an 
alternative to @ref:[Sharded Daemon 
Process](#running-with-sharded-daemon-process)
-is to use [Akka Cluster 
Singleton](https://doc.akka.io/docs/akka/current/typed/cluster-singleton.html)  
+is to use [Apache Pekko Cluster Singleton](https://pekko.apache.org/docs/pekko/current/typed/cluster-singleton.html).
  
 
 Scala
 :  @@snip 
[CassandraProjectionDocExample.scala](/examples/src/it/scala/docs/cassandra/CassandraProjectionDocExample.scala)
 { #running-with-singleton }
diff --git a/docs/src/main/paradox/slick.md b/docs/src/main/paradox/slick.md
index d45d36b..21b3e0a 100644
--- a/docs/src/main/paradox/slick.md
+++ b/docs/src/main/paradox/slick.md
@@ -6,13 +6,13 @@ The @apidoc[SlickProjection$] has support for storing the 
offset in a relational
 used with Scala.
 
 @@@ warning
-The Slick module in Akka Projections is 
[community-driven](https://developer.lightbend.com/docs/introduction/getting-help/support-terminology.html#community-driven)
+The Slick module in Apache Pekko Projections is 
[community-driven](https://developer.lightbend.com/docs/introduction/getting-help/support-terminology.html#community-driven)
 and not included in Lightbend support.
-Prefer using the @ref[JDBC module](jdbc.md) to implement your projection 
handler. Slick support in Akka Projections is meant for users 
+Prefer using the @ref[JDBC module](jdbc.md) to implement your projection 
handler. Slick support in Apache Pekko Projections is meant for users 
 migrating from [`Lagom's Slick 
ReadSideProcessor`](https://www.lagomframework.com/documentation/1.6.x/scala/ReadSideSlick.html).
 @@@
 
-The source of the envelopes can be @ref:[events from Akka 
Persistence](eventsourced.md) or any other `SourceProvider`
+The source of the envelopes can be @ref:[events from Apache Pekko 
Persistence](eventsourced.md) or any other `SourceProvider`
 with supported @ref:[offset types](#offset-types).
 
 The envelope handler returns a `DBIO` that will be run by the projection. This 
means that the target database
@@ -21,7 +21,7 @@ processing semantics is supported. It also offers 
@ref:[at-least-once](#at-least
 
 ## Dependencies
 
-To use the Slick module of Akka Projections add the following dependency in 
your project:
+To use the Slick module of Apache Pekko Projections add the following 
dependency in your project:
 
 @@dependency [sbt,Maven,Gradle] {
   group=org.apache.pekko
@@ -29,7 +29,7 @@ To use the Slick module of Akka Projections add the following 
dependency in your
   version=$project.version$
 }
 
-Akka Projections require Akka $akka.version$ or later, see @ref:[Akka 
version](overview.md#akka-version).
+Apache Pekko Projections requires Akka $akka.version$ or later, see @ref:[Akka version](overview.md#akka-version).
 
 @@project-info{ projectId="pekko-projection-slick" }
 
@@ -141,13 +141,13 @@ Same type of handlers can be used with `SlickProjection` 
instead of `CassandraPr
 
 ### Actor handler
 
-A good alternative for advanced state management is to implement the handler 
as an [actor](https://doc.akka.io/docs/akka/current/typed/actors.html),
+A good alternative for advanced state management is to implement the handler 
as an [actor](https://pekko.apache.org/docs/pekko/current/typed/actors.html),
 which is described in @ref:[Processing with Actor](actor.md).
 
 ### Flow handler
 
-An Akka Streams `FlowWithContext` can be used instead of a handler for 
processing the envelopes,
-which is described in @ref:[Processing with Akka Streams](flow.md).
+An Apache Pekko Streams `FlowWithContext` can be used instead of a handler for 
processing the envelopes,
+which is described in @ref:[Processing with Apache Pekko Streams](flow.md).
 
 ### Handler lifecycle
 
@@ -196,14 +196,12 @@ akka.projection.slick.offset-store {
 
 The supported offset types of the `SlickProjection` are:
 
-* @apidoc[akka.persistence.query.Offset] types from @ref:[events from Akka 
Persistence](eventsourced.md)
+* @apidoc[akka.persistence.query.Offset] types from @ref:[events from Apache 
Pekko Persistence](eventsourced.md)
 * @apidoc[MergeableOffset] that is used for @ref:[messages from 
Kafka](kafka.md#mergeable-offset)
 * `String`
 * `Int`
 * `Long`
 * Any other type that has a configured Akka Serializer is stored with base64 
encoding of the serialized bytes.
-  For example the [Akka Persistence 
Spanner](https://doc.akka.io/docs/akka-persistence-spanner/current/) offset
-  is supported in this way.
 
 ## Configuration
 
diff --git a/docs/src/main/paradox/testing.md b/docs/src/main/paradox/testing.md
index c1e0699..85254b6 100644
--- a/docs/src/main/paradox/testing.md
+++ b/docs/src/main/paradox/testing.md
@@ -1,10 +1,10 @@
 # Testing
 
-Akka Projections provides a TestKit to ease testing. There are two supported 
styles of test: running with an assert function and driving it with an Akka 
Streams TestKit `TestSubscriber.Probe`.
+Apache Pekko Projections provides a TestKit to ease testing. There are two 
supported styles of test: running with an assert function and driving it with 
an Apache Pekko Streams TestKit `TestSubscriber.Probe`.
 
 ## Dependencies
 
-To use the Akka Projections TestKit add the following dependency in your 
project:
+To use the Apache Pekko Projections TestKit add the following dependency in 
your project:
 
 @@dependency [sbt,Maven,Gradle] {
   group=org.apache.pekko
@@ -13,7 +13,7 @@ To use the Akka Projections TestKit add the following 
dependency in your project
   scope="test"
 }
 
-Akka Projections require Akka $akka.version$ or later, see @ref:[Akka 
version](overview.md#akka-version).
+Apache Pekko Projections requires Akka $akka.version$ or later, see @ref:[Akka version](overview.md#akka-version).
 
 @@project-info{ projectId="pekko-projection-testkit" }
 
@@ -58,7 +58,7 @@ Java
 
 ## Testing with a TestSubscriber.Probe
 
-The [Akka Stream 
TestKit](https://doc.akka.io/docs/akka/current/stream/stream-testkit.html#using-the-testkit)
 can be used to drive the pace of envelopes flowing through the Projection.
+The [Apache Pekko Stream 
TestKit](https://pekko.apache.org/docs/pekko/current/stream/stream-testkit.html#using-the-testkit)
 can be used to drive the pace of envelopes flowing through the Projection.
 
 The Projection starts as soon as the first element is requested by the 
`TestSubscriber.Probe`, new elements will be emitted as requested. The 
Projection is stopped once the assert function completes.
 
@@ -71,7 +71,7 @@ Java
 ## Testing with mocked Projection and SourceProvider
 
 To test a handler in isolation you may want to mock out the implementation of a Projection or SourceProvider so that you don't have to set up and tear down the associated technology as part of your _integration_ test.
-For example, you may want to project against a Cassandra database, or read 
envelopes from an Akka Persistence journal source, but you don't want to have 
to run Docker containers or embedded/in-memory services just to run your tests.
+For example, you may want to project against a Cassandra database, or read 
envelopes from an Apache Pekko Persistence journal source, but you don't want 
to have to run Docker containers or embedded/in-memory services just to run 
your tests.
 The @apidoc[TestProjection] allows you to isolate the runtime of your handler 
so that you don't need to run these services.
 Using a `TestProjection` has the added benefit of being fast, since you can 
run everything within the JVM that runs your tests.
 
diff --git a/docs/src/main/paradox/use-cases.md 
b/docs/src/main/paradox/use-cases.md
index 5ef68d5..6b7f08b 100644
--- a/docs/src/main/paradox/use-cases.md
+++ b/docs/src/main/paradox/use-cases.md
@@ -1,6 +1,6 @@
 # Use Cases
 
-Akka Projections is intended for the following primary use cases. It is not 
limited to these use cases,
+Apache Pekko Projections is intended for the following primary use cases. It 
is not limited to these use cases,
 because it is designed to be flexible in the way different sources and targets 
of the projections can be
 composed.  
 
diff --git a/examples/src/test/java/jdocs/guide/EventGeneratorApp.java 
b/examples/src/test/java/jdocs/guide/EventGeneratorApp.java
index 6f540c8..cc254d5 100644
--- a/examples/src/test/java/jdocs/guide/EventGeneratorApp.java
+++ b/examples/src/test/java/jdocs/guide/EventGeneratorApp.java
@@ -175,7 +175,7 @@ class Guardian {
   /**
    * An Actor that persists shopping cart events for a particular persistence 
id (cart id) and tag.
   * This is not how real Event Sourced actors should be implemented. Please look at
-   * https://doc.akka.io/docs/akka/current/typed/persistence.html for more 
information about
+   * https://pekko.apache.org/docs/pekko/current/typed/persistence.html for 
more information about
    * `EventSourcedBehavior`.
    */
   static class CartPersistentBehavior
diff --git a/examples/src/test/scala/docs/guide/EventGeneratorApp.scala 
b/examples/src/test/scala/docs/guide/EventGeneratorApp.scala
index 7a45f1d..42fd556 100644
--- a/examples/src/test/scala/docs/guide/EventGeneratorApp.scala
+++ b/examples/src/test/scala/docs/guide/EventGeneratorApp.scala
@@ -120,7 +120,7 @@ object EventGeneratorApp extends App {
   /**
    * Construct an Actor that persists shopping cart events for a particular 
persistence id (cart id) and tag.
   * This is not how real Event Sourced actors should be implemented. Please look at
-   * https://doc.akka.io/docs/akka/current/typed/persistence.html for more 
information about `EventSourcedBehavior`.
+   * https://pekko.apache.org/docs/pekko/current/typed/persistence.html for 
more information about `EventSourcedBehavior`.
    */
   def cartBehavior(persistenceId: String, tag: String): Behavior[Event] =
     Behaviors.setup { ctx =>
diff --git a/projection-core/src/main/resources/reference.conf 
b/projection-core/src/main/resources/reference.conf
index 9da9d49..27d4365 100644
--- a/projection-core/src/main/resources/reference.conf
+++ b/projection-core/src/main/resources/reference.conf
@@ -6,7 +6,7 @@ akka.projection {
   # The configuration to use to restart the projection after an underlying 
streams failure
   # The Akka streams restart source is used to facilitate this behaviour
   # See the streams documentation for more details
-  # 
https://doc.akka.io/docs/akka/current/stream/stream-error.html#delayed-restarts-with-a-backoff-operator
+  # 
https://pekko.apache.org/docs/pekko/current/stream/stream-error.html#delayed-restarts-with-a-backoff-operator
   restart-backoff {
     min-backoff = 3s
     max-backoff = 30s


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
