Nicholas Chammas created SPARK-19553:
----------------------------------------

             Summary: Add GroupedData.countApprox()
                 Key: SPARK-19553
                 URL: https://issues.apache.org/jira/browse/SPARK-19553
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 2.1.0
            Reporter: Nicholas Chammas
            Priority: Minor


We already have [{{pyspark.sql.functions.approx_count_distinct()}}|http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.approx_count_distinct], which can be applied to grouped data, but it seems odd that you can't get a regular approximate count for grouped data.
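
For reference, here's roughly how the existing function is used on grouped data today ({{df}}, {{col1}}, and {{col2}} are placeholders):

{code}
from pyspark.sql import functions as F

# Approximate distinct count per group -- this already works
(df
    .groupBy('col1')
    .agg(F.approx_count_distinct('col2'))
    .show())
{code}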

I imagine the API would mirror that of [{{RDD.countApprox()}}|http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.countApprox], but I'm not sure:

{code}
# Hypothetical -- GroupedData.countApprox() does not exist yet.
# Following RDD.countApprox(), timeout would be in milliseconds and
# confidence would default to 0.95.
(df
    .groupBy('col1')
    .countApprox(timeout=300, confidence=0.95)
    .show())
{code}
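
For comparison, here's the existing RDD-level call; the result may be incomplete if the timeout expires before all tasks finish:

{code}
# Existing API: approximate count of all rows, waiting at most 300 ms
approx_total = df.rdd.countApprox(timeout=300, confidence=0.95)
{code}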

Or, if we want to mirror the {{approx_count_distinct()}} function, we can do that too. I'd want to understand why that function doesn't take a timeout or confidence parameter, though. Also, what does {{rsd}} mean? It's not documented in the Python API.
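
My guess is that {{rsd}} is the target relative standard deviation of the estimate (the Scala side suggests a default of 0.05), in which case requesting a tighter estimate would look something like this:

{code}
from pyspark.sql import functions as F

# rsd presumably trades accuracy for memory: smaller rsd -> tighter estimate
(df
    .groupBy('col1')
    .agg(F.approx_count_distinct('col2', rsd=0.01))
    .show())
{code}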


