This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 2e33071  Add Data Mechanics to Powered By
2e33071 is described below

commit 2e330710c855f4292cc066c60874a32385a60fb6
Author: Jean-Yves Stephan <jean-y...@datamechanics.co>
AuthorDate: Wed Nov 18 12:54:59 2020 -0600

    Add Data Mechanics to Powered By
    
    Data Mechanics is a managed Spark platform that can be deployed on a
    Kubernetes cluster inside our customers' cloud accounts. We'd love to be on
    the Powered By Spark page (alongside other Spark platforms).
    
    We contribute to open source projects in the Spark ecosystem (the Spark on
    Kubernetes operator, Data Mechanics Delight). We also use Spark internally
    for our recommendation engine and log processing.
    
    I tried to be objective and avoid marketing in the description, but I'm
    open to feedback on changing it. Thanks!
    
    Author: Jean-Yves Stephan <jean-y...@datamechanics.co>
    
    Closes #299 from jystephan/datamechanics-poweredby.
---
 powered-by.md        | 6 ++++++
 site/powered-by.html | 9 +++++++++
 2 files changed, 15 insertions(+)

diff --git a/powered-by.md b/powered-by.md
index 150d402..d314c88 100644
--- a/powered-by.md
+++ b/powered-by.md
@@ -88,6 +88,12 @@ and external data sources, driving holistic and actionable insights.
  - We provided a <a href="https://www.databricks.com/product">cloud-optimized platform</a>
    to run Spark and ML applications on Amazon Web Services and Azure, as well as a comprehensive
    <a href="https://databricks.com/training">training program</a>.
+- <a href="https://www.datamechanics.co">Data Mechanics</a>
+  - Data Mechanics is a cloud-native Spark platform that can be deployed on a Kubernetes cluster
+    inside its customers' AWS, GCP, or Azure cloud environments.
+  - Our focus is to make Spark easy to use and cost-effective for data engineering workloads.
+    We also develop the free, cross-platform, and partially open-source Spark monitoring tool
+    <a href="https://www.datamechanics.co/delight">Data Mechanics Delight</a>.
 
 - <a href="https://datapipelines.com">Data Pipelines</a>
   - Build and schedule ETL pipelines step-by-step via a simple no-code UI.
 - <a href="http://dianping.com">Dianping.com</a>
diff --git a/site/powered-by.html b/site/powered-by.html
index b12cf5f..8b93aaa 100644
--- a/site/powered-by.html
+++ b/site/powered-by.html
@@ -321,6 +321,15 @@ to run Spark and ML applications on Amazon Web Services and Azure, as well as a
 <a href="https://databricks.com/training">training program</a>.</li>
     </ul>
   </li>
+  <li><a href="https://www.datamechanics.co">Data Mechanics</a>
+    <ul>
+      <li>Data Mechanics is a cloud-native Spark platform that can be deployed on a Kubernetes cluster
+inside its customers' AWS, GCP, or Azure cloud environments.</li>
+      <li>Our focus is to make Spark easy to use and cost-effective for data engineering workloads.
+We also develop the free, cross-platform, and partially open-source Spark monitoring tool
+<a href="https://www.datamechanics.co/delight">Data Mechanics Delight</a>.</li>
+    </ul>
+  </li>
  <li><a href="https://datapipelines.com">Data Pipelines</a>
     <ul>
      <li>Build and schedule ETL pipelines step-by-step via a simple no-code UI.</li>

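The entry above describes running Spark natively on a Kubernetes cluster. As a rough,
hypothetical illustration of that deployment style (not Data Mechanics' actual code; the
master URL, container image, and executor count below are placeholders), a PySpark job
can target a Kubernetes API server directly:

    # Minimal sketch, assuming a reachable Kubernetes API server and a Spark
    # container image; every value here is an illustrative placeholder.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("k8s://https://example-apiserver:6443")   # placeholder cluster endpoint
        .appName("powered-by-k8s-sketch")
        .config("spark.kubernetes.container.image", "example/spark-py:3.0.1")  # placeholder image
        .config("spark.executor.instances", "2")
        .getOrCreate()
    )

    spark.range(10).show()  # trivial job to verify executors start
    spark.stop()

In practice the same settings are usually supplied through spark-submit --conf flags
rather than hard-coded in the application.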

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
