This is an automated email from the ASF dual-hosted git repository.

aloalt pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-wayang-website.git


The following commit(s) were added to refs/heads/main by this push:
     new bb7ff63f Update features.md (#76)
bb7ff63f is described below

commit bb7ff63fb0123daf4e751f0eb831780b124e4ea5
Author: Paul King <[email protected]>
AuthorDate: Thu Feb 27 21:23:06 2025 +1100

    Update features.md (#76)
    
    fix typos
---
 docs/introduction/features.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/introduction/features.md b/docs/introduction/features.md
index 03d1fe3c..533b4994 100644
--- a/docs/introduction/features.md
+++ b/docs/introduction/features.md
@@ -19,16 +19,16 @@ Apache Wayang offers a collection of operators that applications utilize to defi
 ### Cost Saving
 Developers can focus on building their applications without the need to understand the complexities of underlying platforms. This simplifies the development process and removes the requirement for developers to be experts in big data infrastructures. Apache Wayang automatically determines the most suitable data processing platforms for specific tasks and deploys applications accordingly.
 
-### Additonal Features
+### Additional Features
 - Zero-copy: cross-platform in-situ data processing
 - High performance: A highly-extensible API framework to generate DAG based federated execution plane at runtime to speed up data processing, providing 50-180x speed up by:
-  - reduce data movement to single platforms.
+  - reduce data movement to single platforms
   - reduce ETL overhead to perform large-scale analytics
   - reduce data duplication and storage
   - execute data processing on the best available technology, including local JVM stream processing
 - Data application agnosticity
-  - run data tasks on multiple platforms without re-platforming the code (move jobs from Hadoop to Spark without changinf the code et large)
-  - implement a sustainable AI stratgy by using data pools in-situ
+  - run data tasks on multiple platforms without re-platforming the code (move jobs from Hadoop to Spark without changing the code at large)
+  - implement a sustainable AI strategy by using data pools in-situ
 - Enterprise ready federated learning (in development)
 <br /><br />
 

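For context on the "run data tasks on multiple platforms without re-platforming the code" point in the patched file: in Wayang's Java API this amounts to registering several platform plugins on a single WayangContext and letting the optimizer decide where each operator runs. Below is a minimal sketch along the lines of the WordCount example from the Wayang README; the input path and class name are placeholders, not part of this commit.

```java
import java.util.Arrays;
import java.util.Collection;

import org.apache.wayang.api.JavaPlanBuilder;
import org.apache.wayang.basic.data.Tuple2;
import org.apache.wayang.core.api.Configuration;
import org.apache.wayang.core.api.WayangContext;
import org.apache.wayang.java.Java;
import org.apache.wayang.spark.Spark;

public class WordCount {
    public static void main(String[] args) {
        // Register two platforms; Wayang's optimizer decides per operator
        // whether the local JVM or Spark is the cheaper place to run it.
        WayangContext wayangContext = new WayangContext(new Configuration())
                .withPlugin(Java.basicPlugin())
                .withPlugin(Spark.basicPlugin());

        JavaPlanBuilder planBuilder = new JavaPlanBuilder(wayangContext)
                .withJobName("WordCount")
                .withUdfJarOf(WordCount.class);

        // The plan itself is platform-agnostic: moving it between the JVM
        // and Spark means changing the registered plugins, not the code.
        Collection<Tuple2<String, Integer>> wordCounts = planBuilder
                .readTextFile("file:///tmp/input.txt")  // placeholder path
                .flatMap(line -> Arrays.asList(line.split("\\W+")))
                .filter(token -> !token.isEmpty())
                .map(word -> new Tuple2<>(word.toLowerCase(), 1))
                .reduceByKey(
                        Tuple2::getField0,
                        (a, b) -> new Tuple2<>(a.getField0(), a.getField1() + b.getField1()))
                .collect();

        wordCounts.forEach(System.out::println);
    }
}
```

Which platform actually executes the count is a cost-based decision Wayang makes at optimization time, which is the mechanism behind the cost-saving and re-platforming claims in features.md.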