The types expected by applySchema are documented in the type reference
section:
http://spark.apache.org/docs/latest/sql-programming-guide.html#spark-sql-datatype-reference
I'd certainly accept a PR to improve the docs and add a link to this from
the applySchema section :)
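To make the contract behind that type reference concrete: applySchema expects every value in a row to be an instance of the Python type corresponding to the field's declared DataType. A Spark-free sketch of that idea (the type table and `check_row` below are hypothetical illustrations, not Spark API; the real check happens inside Spark using StructType/StructField):

```python
# Hypothetical, Spark-free illustration of applySchema's contract:
# every value in a row must match the declared type of its field.
PYTHON_TYPES = {"string": str, "int": int, "double": float}

def check_row(schema, row):
    """schema: list of (field_name, type_name); row: tuple of values."""
    if len(schema) != len(row):
        raise ValueError("row length does not match schema")
    for (name, type_name), value in zip(schema, row):
        if not isinstance(value, PYTHON_TYPES[type_name]):
            raise TypeError(
                "field %r expects %s, got %s"
                % (name, type_name, type(value).__name__))
    return True
```

For example, `check_row([("word", "string"), ("n", "int")], ("spark", 5))` passes, while passing `"5"` for an `int` field raises a TypeError, which mirrors the failures you see at query time when rows don't match the schema given to applySchema.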
> Can you explain why you
Hi Michael,
On Tue, Jan 6, 2015 at 3:43 PM, Michael Armbrust mich...@databricks.com
wrote:
> Oh sorry, I'm rereading your email more carefully. It's only because you
> have some setup code that you want to amortize?
Yes, exactly that.
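To illustrate, the pattern I'm after is roughly the following mapPartitions-style function, where the expensive setup is amortized over a whole partition rather than paid per row (a Spark-free sketch; `Connection` and its methods are hypothetical stand-ins for my real network resource):

```python
class Connection:
    """Hypothetical stand-in for an expensive network resource."""
    def lookup(self, key):
        return len(key)          # placeholder for the real network call
    def close(self):
        pass

def add_computed_column(rows):
    """Shaped for rdd.mapPartitions(add_computed_column): the setup
    runs once per partition instead of once per row."""
    conn = Connection()          # amortized setup
    try:
        for row in rows:
            # append one computed value to each row tuple
            yield row + (conn.lookup(row[0]),)
    finally:
        conn.close()
```

Called on an iterator of row tuples this yields each row with one extra value; with the placeholder lookup, `list(add_computed_column(iter([("spark",)])))` gives `[("spark", 5)]`.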
Concerning the docs, I'd be happy to contribute, but I don't
Oh sorry, I'm rereading your email more carefully. It's only because you
have some setup code that you want to amortize?
On Mon, Jan 5, 2015 at 10:40 PM, Michael Armbrust mich...@databricks.com
wrote:
> The types expected by applySchema are documented in the type reference
> section:
Hi,
I have a SchemaRDD where I want to add a column with a value that is
computed from the rest of the row. As the computation involves a
network operation and requires setup code, I can't use
SELECT *, myUDF(*) FROM rdd,
but I wanted to use a combination of:
- get schema of input SchemaRDD
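The rest of that plan is presumably along these lines: extend the input schema with the new field, compute the value for each row per partition, and re-apply the extended schema. A Spark-free sketch under those assumptions (plain `(name, type)` tuples stand in for StructType/StructField, and tuples stand in for Rows; all names are hypothetical):

```python
# Spark-free sketch of the plan; in real Spark 1.2 code the schema
# comes from the input SchemaRDD and the final step would be
# sqlContext.applySchema(new_rows, output_schema).

def extend_schema(schema, name, type_name):
    """Assumed step 2: copy the input schema and append one field."""
    return schema + [(name, type_name)]

def add_length_column(rows):
    """Assumed step 3: compute the new value for every row; in real
    code this would run inside rdd.mapPartitions."""
    for row in rows:
        yield row + (len(row[0]),)

input_schema = [("word", "string")]        # step 1: read off the input
output_schema = extend_schema(input_schema, "length", "int")
new_rows = list(add_length_column([("spark",), ("sql",)]))
# new_rows and output_schema would then be handed to applySchema.
```

The design point is that the schema manipulation is cheap driver-side work, while the per-row computation stays inside the partition function where any setup cost is amortized.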