Re: Thoughts on release cadence?

2017-07-30 Thread Reynold Xin
This is reasonable ... +1

On Sun, Jul 30, 2017 at 2:19 AM, Sean Owen wrote:
> The project had traditionally posted some guidance about upcoming
> releases. The last release cycle was about 6 months. What about penciling
> in December 2017 for 2.3.0?

Failing to write a data-frame containing a UDT to parquet format

2017-07-30 Thread Erik Erlandson
I'm trying to support Parquet I/O for DataFrames that contain a UDT (for t-digests). The UDT is defined here: https://github.com/erikerlandson/isarn-sketches-spark/blob/feature/pyspark/src/main/scala/org/apache/spark/isarnproject/sketches/udt/TDigestUDT.scala#L37 I can read and write using
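
[For reference, a minimal sketch of the usual Spark 2.x UDT pattern and a Parquet round trip. This is a toy Point/PointUDT, not the TDigestUDT linked above; the package placement under org.apache.spark is assumed to be necessary because UserDefinedType is private[spark] in 2.x, which is also why the linked file lives under that package.]

    // Hypothetical toy UDT for illustration only (not the thread's TDigestUDT).
    // UserDefinedType is private[spark] in Spark 2.x, so the class must live
    // somewhere under the org.apache.spark package.
    package org.apache.spark.example.udt

    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.catalyst.expressions.GenericInternalRow
    import org.apache.spark.sql.types._

    @SQLUserDefinedType(udt = classOf[PointUDT])
    case class Point(x: Double, y: Double)

    class PointUDT extends UserDefinedType[Point] {
      // Physical representation in Catalyst (and thus in Parquet): a struct of two doubles.
      override def sqlType: DataType = StructType(Seq(
        StructField("x", DoubleType, nullable = false),
        StructField("y", DoubleType, nullable = false)))

      // User object -> InternalRow matching sqlType
      override def serialize(p: Point): InternalRow =
        new GenericInternalRow(Array[Any](p.x, p.y))

      // InternalRow -> user object
      override def deserialize(datum: Any): Point = datum match {
        case row: InternalRow => Point(row.getDouble(0), row.getDouble(1))
      }

      override def userClass: Class[Point] = classOf[Point]
    }

Usage sketch, assuming an existing SparkSession named spark and a writable /tmp path:

    import spark.implicits._
    val df = Seq((1, Point(0.0, 1.0)), (2, Point(2.0, 3.0))).toDF("id", "p")
    df.write.mode("overwrite").parquet("/tmp/points.parquet")
    spark.read.parquet("/tmp/points.parquet").show()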

Re: Thoughts on release cadence?

2017-07-30 Thread Dongjoon Hyun
+1

Bests,
Dongjoon

On Sun, Jul 30, 2017 at 02:20 Sean Owen wrote:
> The project had traditionally posted some guidance about upcoming
> releases. The last release cycle was about 6 months. What about penciling
> in December 2017 for 2.3.0?

Thoughts on release cadence?

2017-07-30 Thread Sean Owen
The project had traditionally posted some guidance about upcoming releases. The last release cycle was about 6 months. What about penciling in December 2017 for 2.3.0? http://spark.apache.org/versioning-policy.html