Do people usually import o.a.spark.rdd._ ?
Also, in order to maintain source and binary compatibility, we would need to
keep both, right?
On Thu, Nov 6, 2014 at 3:12 AM, Shixiong Zhu zsxw...@gmail.com wrote:
I saw many people ask how to convert an RDD to a PairRDDFunctions. I would
like to
If we put the `implicit` into package object rdd or object rdd, then when we
write `rdd.groupByKey()`, because rdd is an instance of RDD, the Scala compiler
will search `object rdd` (the companion object) and `package object rdd` (the
package object) by default. We don't need to import them explicitly. Here is a post
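The resolution rule described above can be sketched with a tiny self-contained example (the class names here are hypothetical, not Spark's): an implicit conversion defined in a type's companion object sits in that type's implicit scope, so the compiler finds it at the call site without any import.

```scala
import scala.language.implicitConversions

// A wrapper that adds an extra method, analogous to PairRDDFunctions.
class Wrapped(val n: Int) {
  def double: Int = n * 2
}

// The base type, analogous to RDD.
class Num(val n: Int)

object Num {
  // Defined in Num's companion object, so it is in the implicit scope
  // of every Num value; call sites need no explicit import.
  implicit def numToWrapped(x: Num): Wrapped = new Wrapped(x.n)
}

object Demo extends App {
  val x = new Num(21)
  // `double` is not a member of Num; the compiler inserts numToWrapped
  // after searching Num's companion object.
  println(x.double)
}
```

The same lookup applies to a `package object` in the package where the type is declared, which is why placing the implicit in `package object rdd` (or in RDD's companion object) would make `rdd.groupByKey()` compile without `import SparkContext._`.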
That seems like a great idea. Can you submit a pull request?
On Thu, Nov 13, 2014 at 7:13 PM, Shixiong Zhu zsxw...@gmail.com wrote:
If we put the `implicit` into package object rdd or object rdd, then when
we write `rdd.groupByKey()`, because rdd is an instance of RDD, the Scala
compiler will search
OK. I'll take it.
Best Regards,
Shixiong Zhu
2014-11-14 12:34 GMT+08:00 Reynold Xin r...@databricks.com:
That seems like a great idea. Can you submit a pull request?
On Thu, Nov 13, 2014 at 7:13 PM, Shixiong Zhu zsxw...@gmail.com wrote:
If we put the `implicit` into package object rdd or
I saw many people ask how to convert an RDD to a PairRDDFunctions. I would
like to ask a question about it. Why not put the following implicit into
package object rdd or object rdd?
implicit def rddToPairRDDFunctions[K, V](rdd: RDD[(K, V)])
(implicit kt: ClassTag[K], vt: ClassTag[V],
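A complete version of the proposal might look like the sketch below. This is not Spark's actual source: the trailing `Ordering[K]` parameter and the body are assumptions modeled on the `rddToPairRDDFunctions` definition that lived in `object SparkContext` at the time, moved here into a hypothetical `package object rdd`.

```scala
package org.apache.spark

import scala.language.implicitConversions
import scala.reflect.ClassTag

import org.apache.spark.rdd.{PairRDDFunctions, RDD}

package object rdd {
  // Sketch only. Because RDD is declared in org.apache.spark.rdd, this
  // package object is part of the implicit scope of every RDD[(K, V)],
  // so rdd.groupByKey() compiles with no `import SparkContext._`.
  // The `ord` parameter mirrors the SparkContext version and is an
  // assumption here.
  implicit def rddToPairRDDFunctions[K, V](rdd: RDD[(K, V)])
      (implicit kt: ClassTag[K], vt: ClassTag[V],
       ord: Ordering[K] = null): PairRDDFunctions[K, V] =
    new PairRDDFunctions(rdd)
}
```

Keeping the old definition in `object SparkContext` alongside this one (deprecated, as the thread suggests) would preserve source and binary compatibility for code that already imports it.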