This is an automated email from the ASF dual-hosted git repository.

dgrove pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/openwhisk.git


The following commit(s) were added to refs/heads/master by this push:
     new db2b1c6fe Fix spelling (#5518)
db2b1c6fe is described below

commit db2b1c6fe4062756c65eff00ced0c6b15ab395b7
Author: John Bampton <[email protected]>
AuthorDate: Tue Oct 22 02:14:14 2024 +1000

    Fix spelling (#5518)
---
 .github/workflows/README.md                                  | 12 ++++++------
 ansible/group_vars/all                                       |  2 +-
 ansible/publish.yml                                          |  2 +-
 common/scala/src/main/resources/application.conf             | 12 ++++++------
 .../org/apache/openwhisk/core/ack/MessagingActiveAck.scala   |  2 +-
 .../scala/org/apache/openwhisk/core/connector/Message.scala  |  4 ++--
 .../org/apache/openwhisk/core/containerpool/Container.scala  |  2 +-
 tools/owperf/README.md                                       | 12 ++++++------
 8 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/.github/workflows/README.md b/.github/workflows/README.md
index dbe6c7089..6a9e1bf00 100644
--- a/.github/workflows/README.md
+++ b/.github/workflows/README.md
@@ -2,7 +2,7 @@
 
 There are a few [GitHub 
secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets) 
to configure to fully leverage the build.
 
-You can use and set the followings secrets also in your fork.
+You can use and set the following secrets also in your fork.
 
 ## Ngrok Debugging
 
@@ -10,12 +10,12 @@ You can debug a GitHub Action build using 
[NGROK](https://ngrok.com/).
 
 It is disabled for automated build triggered by push and pull_requests.
 
-You can trigger a workflow run manually  enabling ngrok debugging.
+You can trigger a workflow run manually enabling ngrok debugging.
 
 It will open an ssh connection to the VM and keep it up and running for one 
hour.
-The connection url is showns in the log for debugAction.sh
+The connection URL is shown in the log for debugAction.sh
 
-You can then connect to the build vm, and debug it.
+You can then connect to the build VM, and debug it.
 You need to use a password of your choice to access it.
 
 You can continue the build with `touch /tmp/continue`.
@@ -30,7 +30,7 @@ Then set the following secrets:
 
 ## Log Upload
 
-The build uploads the logs to an s3 bucket allowing to inspect them with a 
browser.
+The build uploads the logs to a S3 bucket allowing to inspect them with a 
browser.
 
 You need to create the bucket with the following commands:
 
@@ -53,4 +53,4 @@ To enable upload to the created bucket you need to set the 
following secrets:
 
 If you want to get notified of what happens on slack, create an [Incoming Web 
Hook](https://api.slack.com/messaging/webhooks) and then set the following 
secret:
 
-- `SLACK_WEBHOOK`: the incoming webhook url provided by slack.
+- `SLACK_WEBHOOK`: the incoming webhook URL provided by slack.
diff --git a/ansible/group_vars/all b/ansible/group_vars/all
index 6a260daa2..a8ca41bca 100644
--- a/ansible/group_vars/all
+++ b/ansible/group_vars/all
@@ -273,7 +273,7 @@ nginx:
 
 # These are the variables to define all database relevant settings.
 # The authKeys are the users, that are initially created to use OpenWhisk.
-# The keys are stored in ansible/files and will be inserted into the 
authentication databse.
+# The keys are stored in ansible/files and will be inserted into the 
authentication database.
 # The key db.whisk.actions is the name of the database where all artifacts of 
the user are stored. These artifacts are actions, triggers, rules and packages.
 # The key db.whisk.activation is the name of the database where all 
activations are stored.
 # The key db.whisk.auth is the name of the authentication database where all 
keys of all users are stored.
diff --git a/ansible/publish.yml b/ansible/publish.yml
index 505da973c..73dfded09 100644
--- a/ansible/publish.yml
+++ b/ansible/publish.yml
@@ -16,7 +16,7 @@
 #
 ---
 # This playbook updates CLIs and SDKs on an existing edge host.
-# Artifacts get built and published to NGINX. This assumes an already running 
egde host in an Openwhisk deployment.
+# Artifacts get built and published to NGINX. This assumes an already running 
edge host in an Openwhisk deployment.
 
 - hosts: edge
   roles:
diff --git a/common/scala/src/main/resources/application.conf 
b/common/scala/src/main/resources/application.conf
index 140d4c4b0..5c98c4ddc 100644
--- a/common/scala/src/main/resources/application.conf
+++ b/common/scala/src/main/resources/application.conf
@@ -149,7 +149,7 @@ whisk {
             acks = 1
             request-timeout-ms = 30000
             metadata-max-age-ms = 15000
-            # max-request-size is defined programatically for producers 
related to the "completed" and "invoker" topics
+            # max-request-size is defined programmatically for producers 
related to the "completed" and "invoker" topics
             # as ${whisk.activation.kafka.payload.max} + 
${whisk.activation.kafka.serdes-overhead}. All other topics use
             # the default of 1 MB.
         }
@@ -182,14 +182,14 @@ whisk {
                 segment-bytes   =  536870912
                 retention-bytes = 1073741824
                 retention-ms    = 3600000
-                # max-message-bytes is defined programatically as 
${whisk.activation.kafka.payload.max} +
+                # max-message-bytes is defined programmatically as 
${whisk.activation.kafka.payload.max} +
                 # ${whisk.activation.kafka.serdes-overhead}.
             }
             creationAck {
                 segment-bytes   =  536870912
                 retention-bytes = 1073741824
                 retention-ms    = 3600000
-                # max-message-bytes is defined programatically as 
${whisk.activation.kafka.payload.max} +
+                # max-message-bytes is defined programmatically as 
${whisk.activation.kafka.payload.max} +
                 # ${whisk.activation.kafka.serdes-overhead}.
             }
             health {
@@ -201,7 +201,7 @@ whisk {
                 segment-bytes     =  536870912
                 retention-bytes   = 1073741824
                 retention-ms      =  172800000
-                # max-message-bytes is defined programatically as 
${whisk.activation.kafka.payload.max} +
+                # max-message-bytes is defined programmatically as 
${whisk.activation.kafka.payload.max} +
                 # ${whisk.activation.kafka.serdes-overhead}.
             }
             events {
@@ -586,9 +586,9 @@ whisk {
         cache-expiry = 30 seconds #how long to keep spans in cache. Set to 
appropriate value to trace long running requests
         #Zipkin configuration. Uncomment following to enable zipkin based 
tracing
         #zipkin {
-        #   url = "http://localhost:9411"; //url to connecto to zipkin server
+        #   url = "http://localhost:9411"; //URL to connect to zipkin server
              //sample-rate to decide a request is sampled or not.
-             //sample-rate 0.5 eqauls to sampling 50% of the requests
+             //sample-rate 0.5 equals to sampling 50% of the requests
              //sample-rate of 1 means 100% sampling.
              //sample-rate of 0 means no sampling
         #   sample-rate = "0.01" // sample 1% of requests by default
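
For context on the corrected comments in application.conf: max-request-size (for the "completed" and "invoker" topic producers) and max-message-bytes (for the topic configs) are not set statically in this file but are derived in code as ${whisk.activation.kafka.payload.max} + ${whisk.activation.kafka.serdes-overhead}. A minimal sketch of that derivation, assuming the config keys named in the comment and placeholder object/field names (not the actual OpenWhisk classes):

    import com.typesafe.config.ConfigFactory

    // Illustrative only: derive the Kafka size limit the way the comments
    // describe -- payload.max plus serdes-overhead -- rather than a static
    // value in application.conf. Key names follow the comment text.
    object KafkaMessageSizeSketch {
      private val conf = ConfigFactory.load()

      // e.g. whisk.activation.kafka.payload.max (bytes)
      val payloadMax: Long = conf.getBytes("whisk.activation.kafka.payload.max")
      // e.g. whisk.activation.kafka.serdes-overhead (bytes)
      val serdesOverhead: Long = conf.getBytes("whisk.activation.kafka.serdes-overhead")

      // Applied programmatically as max-request-size / max-message-bytes.
      val maxMessageBytes: Long = payloadMax + serdesOverhead
    }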
diff --git 
a/common/scala/src/main/scala/org/apache/openwhisk/core/ack/MessagingActiveAck.scala
 
b/common/scala/src/main/scala/org/apache/openwhisk/core/ack/MessagingActiveAck.scala
index a0beb1cfe..0ff5a16f4 100644
--- 
a/common/scala/src/main/scala/org/apache/openwhisk/core/ack/MessagingActiveAck.scala
+++ 
b/common/scala/src/main/scala/org/apache/openwhisk/core/ack/MessagingActiveAck.scala
@@ -61,7 +61,7 @@ class MessagingActiveAck(producer: MessageProducer, instance: 
InstanceId, eventS
 
     // An acknowledgement containing the result is only needed for blocking 
invokes in order to further the
     // continuation. A result message for a non-blocking activation is not 
actually registered in the load balancer
-    // and the container proxy should not send such an acknowlegement unless 
it's a blocking request. Here the code
+    // and the container proxy should not send such an acknowledgement unless 
it's a blocking request. Here the code
     // is defensive and will shrink all non-blocking acknowledgements.
     send(if (blockingInvoke) acknowledgement else 
acknowledgement.shrink).recoverWith {
       case t if t.getCause.isInstanceOf[RecordTooLargeException] =>
diff --git 
a/common/scala/src/main/scala/org/apache/openwhisk/core/connector/Message.scala 
b/common/scala/src/main/scala/org/apache/openwhisk/core/connector/Message.scala
index 85acdd8e0..51e734254 100644
--- 
a/common/scala/src/main/scala/org/apache/openwhisk/core/connector/Message.scala
+++ 
b/common/scala/src/main/scala/org/apache/openwhisk/core/connector/Message.scala
@@ -115,7 +115,7 @@ abstract class AcknowledgementMessage(private val tid: 
TransactionId) extends Me
  * combines the `CompletionMessage` and `ResultMessage`. The `response` may be 
an `ActivationId` to allow for failures
  * to send the activation result because of event-bus size limitations.
  *
- * The constructor is private so that callers must use the more restrictive 
constructors which ensure the respose is always
+ * The constructor is private so that callers must use the more restrictive 
constructors which ensure the response is always
  * Right when this message is created.
  */
 case class CombinedCompletionAndResultMessage private (override val transid: 
TransactionId,
@@ -167,7 +167,7 @@ case class CompletionMessage private (override val transid: 
TransactionId,
  * This is part of a split phase notification, and does not indicate that the 
slot is available, which is indicated with
  * a `CompletionMessage`. Note that activation record will not contain any 
logs from the action execution, only the result.
  *
- * The constructor is private so that callers must use the more restrictive 
constructors which ensure the respose is always
+ * The constructor is private so that callers must use the more restrictive 
constructors which ensure the response is always
  * Right when this message is created.
  */
 case class ResultMessage private (override val transid: TransactionId, 
response: Either[ActivationId, WhiskActivation])
diff --git 
a/common/scala/src/main/scala/org/apache/openwhisk/core/containerpool/Container.scala
 
b/common/scala/src/main/scala/org/apache/openwhisk/core/containerpool/Container.scala
index 1df29e055..7e22560f3 100644
--- 
a/common/scala/src/main/scala/org/apache/openwhisk/core/containerpool/Container.scala
+++ 
b/common/scala/src/main/scala/org/apache/openwhisk/core/containerpool/Container.scala
@@ -139,7 +139,7 @@ trait Container {
             endTime = r.interval.end,
             logLevel = InfoLevel)
         case Failure(t) =>
-          transid.failed(this, start, s"initializiation failed with $t")
+          transid.failed(this, start, s"initialization failed with $t")
       }
       .flatMap { result =>
         // if runtime container is shutting down, reschedule the activation 
message
diff --git a/tools/owperf/README.md b/tools/owperf/README.md
index ccf79b636..3fb1c42b0 100644
--- a/tools/owperf/README.md
+++ b/tools/owperf/README.md
@@ -26,7 +26,7 @@ This test tool benchmarks an OpenWhisk deployment for (warm) 
latency and through
    1. Parameter size - controls the size of the parameter passed to the action 
or event
    1. Actions per iteration (a.k.a. _ratio_) - controls how many rules are 
associated with a trigger [for rules] or how many actions are asynchronously 
invoked (burst size) at each iteration of a test worker [for actions].
 1. "Master apart" mode - Allow the master client to perform latency 
measurements while the worker clients stress OpenWhisk using a specific 
invocation pattern in the background. Useful for measuring latency under load, 
and for comparing latencies of rules and actions under load.
-The tool is written in node.js, using mainly the modules of OpenWhisk client, 
cluster for concurrency, and commander for CLI procssing.
+The tool is written in node.js, using mainly the modules of OpenWhisk client, 
cluster for concurrency, and commander for CLI processing.
 
 ### Operation
 The general operation of a test is simple:
@@ -39,11 +39,11 @@ The general operation of a test is simple:
 
 Final results are written to the standard output stream (so can be redirected 
to a file) as a single highly-detailed CSV record containing all the input 
settings and the output measurements (see below). There is additional control 
information that is written to the standard error stream and can be silenced in 
CLI. The control information also contains the CSV header, so it can be copied 
into a spreadsheet if needed.
 
-It is possible to invoke the tool in "Master apart" mode, where the master 
client is invoking a different activity than the workers, and at possibly a 
different (very likely, much slower) rate. In this mode, latency statsitics are 
computed based solely on the master's data, since the worker's activity is used 
only as background to stress the OpenWhisk deployment. So one experiment can 
have the master client invoke rules and another one can have the master client 
invoke actions, while in  [...]
+It is possible to invoke the tool in "Master apart" mode, where the master 
client is invoking a different activity than the workers, and at possibly a 
different (very likely, much slower) rate. In this mode, latency statistics are 
computed based solely on the master's data, since the worker's activity is used 
only as background to stress the OpenWhisk deployment. So one experiment can 
have the master client invoke rules and another one can have the master client 
invoke actions, while in  [...]
 
 The tool is highly customizable via CLI options. All the independent test 
variables are controlled via CLI. This includes number of workers, invocation 
pattern, OW client configuration, test action sleep time, etc.
 
-Test setup and teardown can be independently skipped via CLI, and/or directly 
invoked from the external setup script (```setup.sh```), so that setup can be 
shared between multiple tests. More advanced users can replace the test action 
with a custom action in the setup script to benchmark action invocation or 
event-respose throughput and latency of specific applications.
+Test setup and teardown can be independently skipped via CLI, and/or directly 
invoked from the external setup script (```setup.sh```), so that setup can be 
shared between multiple tests. More advanced users can replace the test action 
with a custom action in the setup script to benchmark action invocation or 
event-response throughput and latency of specific applications.
 
 **Clock skew**: OpenWhisk is a distributed system, which means that clock skew 
is expected between the client machine computing invocation timestamps and the 
controllers or invokers that generate the timestamps in the activation records. 
However, this tool assumes that clock skew is bound at few msec range, due to 
having all machines clocks synchronized, typically using NTP. At such a scale, 
clock skew is quite small compared to the measured time periods. Some of the 
time periods are mea [...]
 
@@ -67,7 +67,7 @@ The following time-stamps are collected for each invocation, 
of either action, o
 * **TS** (Trigger Start) - taken from the activation record of the trigger 
linked to the rules, so applies only to rule tests. All actions invoked by the 
rules of the same trigger have the same TS value.
 * **AS** (Action Start) - taken from the activation record of the action.
 * **AE** (Action End) - taken from the activation record of the action.
-* **AI** (After Invocation) - taken by the client immmediately after the 
invocation, for blocking action invocation tests only.
+* **AI** (After Invocation) - taken by the client immediately after the 
invocation, for blocking action invocation tests only.
 
 Based on these timestamps, the following measurements are taken:
 * **OEA** (Overhead of Entering Action) - OpenWhisk processing overhead from 
sending the action invocation or trigger fire to the beginning of the action 
execution. OEA = AS-BI
@@ -77,7 +77,7 @@ Based on these timestamps, the following measurements are 
taken:
 * **TA** (Trigger to Answer) - the processing time from the start of the 
trigger process to the start of the action (rule tests only). TA = AS-TS
 * **ORA** (Overhead of Returning from Action) - time from action end till 
being received by the client (blocking action tests only). ORA = AI - AE
 * **RTT** (Round Trip Time) - time at the client from action invocation till 
reply received (blocking action tests only). RTT = AI - BI
-* **ORTT** (Overhead of RTT) - RTT at the client exclugin the net action 
computation time. ORTT = RTT - D
+* **ORTT** (Overhead of RTT) - RTT at the client excluding the net action 
computation time. ORTT = RTT - D
 
 For each measurement, the tool computes average (_avg_), standard deviation 
(_std_), and extremes (_min_ and _max_).
 
@@ -92,7 +92,7 @@ Throughput is measured w.r.t. several different counters. 
During post-processing
 * **Activations** - number of completed activations inside the time frame, 
counting both trigger activations (based on TS), and action activations (based 
on AS and AE).
 * **Invocations** - number of successful invocations of complete rules or 
actions (depending on the activity). This is the "service rate" of invocations 
(assuming errors happen only because OW is overloaded).
 
-For each counter, the tool reports the total counter value (_abs_), total 
throughput per second (_tp_), througput of the worker clients without the 
master (_tpw_) and the master's percentage of throughput relative to workers 
(_tpd_). The last two values are important mostly for master apart mode.
+For each counter, the tool reports the total counter value (_abs_), total 
throughput per second (_tp_), throughput of the worker clients without the 
master (_tpw_) and the master's percentage of throughput relative to workers 
(_tpd_). The last two values are important mostly for master apart mode.
 
 Aside from that, the tool also counts **errors**. Failed invocations - of 
actions, of triggers, or of actions from triggers (via rules) are counted each 
as an error. The tool reports both absolute error count (_abs_) and percent out 
of requests (_percent_).
 
