maropu commented on a change in pull request #28120: [SPARK-31349][SQL][DOCS] Document built-in aggregate functions in SQL Reference
URL: https://github.com/apache/spark/pull/28120#discussion_r404531435
 
 

 ##########
 File path: docs/sql-ref-functions-builtin-aggregate.md
 ##########
 @@ -19,4 +19,616 @@ license: |
   limitations under the License.
 ---
 
-Aggregate functions
\ No newline at end of file
+Spark SQL provides built-in aggregate functions defined in the Dataset API and SQL interface. Aggregate functions
+operate on a group of rows and return a single value.
+
+Spark SQL aggregate functions are grouped as <code>agg_funcs</code> in Spark SQL. Below is the list of functions.
+
+**Note:** All functions below have another signature which takes a String as an expression.
+
+<table class="table">
+  <thead>
+    <tr><th style="width:25%">Function</th><th>Parameter Type(s)</th><th>Description</th></tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td><b>{any | some | bool_or}</b>(<i>expression</i>)</td>
+      <td>boolean</td>
+      <td>Returns true if at least one value is true.</td>
+    </tr>
+    <tr>
+      <td><b>approx_count_distinct</b>(<i>expression[, relativeSD]</i>)</td>
+      <td>(long, double)</td>
+      <td>RelativeSD is the maximum estimation error allowed. Returns the estimated cardinality by HyperLogLog++.</td>
 
 Review comment:
   nit: better to wrap `RelativeSD` with \`?
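
   For context, a minimal sketch of how the two functions documented in the diff above behave; the inline VALUES relations and the relativeSD value 0.01 are made up for illustration:

       -- any / some / bool_or: true if at least one value in the group is true
       SELECT bool_or(col) FROM VALUES (true), (false), (false) AS tab(col);   -- true

       -- approx_count_distinct: estimated distinct count via HyperLogLog++;
       -- the optional second argument, relativeSD, is the maximum estimation error allowed
       SELECT approx_count_distinct(col, 0.01) FROM VALUES (1), (1), (2), (3) AS tab(col);   -- ~3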

