On Thu, Feb 18, 2016 at 11:18:44PM +, Kabeer Ahmed wrote:
> I use Spark 1.5 with CDH5.5 distribution and I see that support is
> present for UDAF. From the link:
> https://databricks.com/blog/2015/09/16/spark-1-5-dataframe-api-highlights-datetimestring-handling-time-intervals-and-udafs.html,
>
I'm working on an application that uses DataFrames (Scala API) in Spark 1.5.0,
and I need to define and use several custom aggregators. However, I'm having
trouble figuring out how to do this.
First, which version of Spark did UDAF support land in? Has it in fact
landed at all?
https://issues.apac
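
For what it's worth, the Databricks post linked above describes the
`UserDefinedAggregateFunction` base class that shipped with Spark 1.5 in
`org.apache.spark.sql.expressions`. A minimal sketch along the lines of that
post's geometric-mean example (assuming a Spark 1.5 classpath):

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

// Geometric mean of a double column; the aggregation buffer holds
// a (count, running product) pair.
class GeometricMean extends UserDefinedAggregateFunction {
  def inputSchema: StructType =
    StructType(StructField("value", DoubleType) :: Nil)
  def bufferSchema: StructType =
    StructType(StructField("count", LongType) ::
               StructField("product", DoubleType) :: Nil)
  def dataType: DataType = DoubleType
  def deterministic: Boolean = true

  def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = 0L   // count
    buffer(1) = 1.0  // product
  }
  // Fold one input row into the buffer.
  def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    buffer(0) = buffer.getLong(0) + 1
    buffer(1) = buffer.getDouble(1) * input.getDouble(0)
  }
  // Combine two partial buffers (from different partitions).
  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    buffer1(0) = buffer1.getLong(0) + buffer2.getLong(0)
    buffer1(1) = buffer1.getDouble(1) * buffer2.getDouble(1)
  }
  def evaluate(buffer: Row): Any =
    math.pow(buffer.getDouble(1), 1.0 / buffer.getLong(0))
}
```

Once defined, the blog post shows it being registered for SQL with
`sqlContext.udf.register("gm", new GeometricMean)` or passed directly to
`df.groupBy(...).agg(...)`.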
I'm working with Spark 1.5.0, and I'm using the Scala API to construct
DataFrames and perform operations on them. My application requires that I
synthesize column names for intermediate results under some circumstances,
and I don't know what the rules are for legal column names. In particular,
I'
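
On the column-name question: in my experience the DataFrame API accepts
essentially arbitrary strings as column names, including spaces and dots; the
catch is that such names must be backtick-quoted whenever they appear inside a
SQL/expression string. A small local-mode sketch (treating the exact quoting
behaviour of Spark 1.5's expression parser as something to verify yourself):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(
  new SparkConf().setMaster("local[2]").setAppName("colname-sketch"))
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

val df = sc.parallelize(Seq((1, 2))).toDF("a", "b")

// Synthesized names with spaces and dots are accepted as-is
// by withColumnRenamed / withColumn / as:
val renamed = df.withColumnRenamed("a", "intermediate result.1")

// ...but must be backtick-quoted inside expression strings,
// or the parser will read the dot as a struct-field accessor:
val doubled = renamed.selectExpr("`intermediate result.1` * 2 as doubled")

// Materialize before shutting the context down.
val value = doubled.first().getInt(0)

sc.stop()
```

The hypothetical name `intermediate result.1` here is just an illustration of
a synthesized intermediate-result name; anything that round-trips through
backticks should be safe.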