Was the feature of displaying accumulators in the Spark UI implemented in Spark
1.4.1, or was that added later?
Thanks,
Daniel
In Spark 1.6.1, how can I convert a DataFrame to a Dataset[Row]?
Is there a direct conversion? (Trying .as[Row] doesn't work,
even after importing .implicits._.)
Is there some way to map the Rows from the DataFrame into the Dataset[Row]?
(DataFrame.map would just make another DataFrame,
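As far as I know, Spark 1.6 provides no implicit Encoder[Row], which is why .as[Row] fails to compile; DataFrame only became a type alias for Dataset[Row] in Spark 2.0. The usual 1.6 workaround is to go through a case class, for which an encoder can be derived. A minimal sketch, assuming a spark-shell session with a predefined sqlContext and a hypothetical people.json input:

```scala
// Sketch for Spark 1.6: there is no Encoder[Row], so df.as[Row] cannot
// compile. Convert to a typed Dataset via a case class instead.
// (sqlContext and people.json are assumed, as in spark-shell.)
import sqlContext.implicits._

case class Person(name: String, age: Long)

val df = sqlContext.read.json("people.json") // a DataFrame
val ds = df.as[Person]                       // a Dataset[Person]
```

In Spark 2.0 and later, DataFrame is literally Dataset[Row], so no conversion is needed there.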
Koert,
Koert Kuipers wrote:
A single JSON object would, for most parsers, need to fit entirely in
memory when being read or written.
Note that codlife didn't seem to be asking about /single-object/ JSON files,
but about /standard-format/ JSON files.
On Oct 15, 2016 11:09, "codlife"
In any case, it seems that the current behavior is not documented sufficiently.
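The distinction being discussed is JSON Lines (one complete, self-contained JSON object per line, which a reader can process record by record) versus a single top-level JSON document, which a typical parser must hold in memory in full. A plain-Scala sketch of the difference, with made-up record contents:

```scala
// JSON Lines: one self-contained object per line, so a reader can
// stream the file and parse each record independently.
val jsonLines =
  "{\"id\": 1, \"name\": \"a\"}\n" +
  "{\"id\": 2, \"name\": \"b\"}\n" +
  "{\"id\": 3, \"name\": \"c\"}"

// Records can be counted (and parsed) one line at a time.
val records = jsonLines.split("\n").toList
assert(records.length == 3)
assert(records.forall(l => l.startsWith("{") && l.endsWith("}")))

// The same data as a single top-level JSON array is one document: a
// typical parser must read to the closing bracket before any record
// is available, so the whole document needs to fit in memory.
val singleDoc = "[{\"id\": 1}, {\"id\": 2}, {\"id\": 3}]"
assert(singleDoc.split("\n").length == 1)
```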
Koert Kuipers wrote:
I can see how unquoted CSV would work if you escape delimiters, but I have
never seen that in practice.
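For illustration only (this is a toy splitter, not any Spark API): escaping delimiters in an unquoted record means a backslash before the delimiter marks it as field content rather than a separator.

```scala
// Toy example, not a Spark API: split one unquoted CSV record where a
// delimiter inside a field is escaped with a backslash, e.g. a\,b.
def splitEscaped(line: String, delim: Char = ','): List[String] = {
  val fields = scala.collection.mutable.ListBuffer.empty[String]
  val cur = new StringBuilder
  var i = 0
  while (i < line.length) {
    val c = line(i)
    if (c == '\\' && i + 1 < line.length) {
      cur += line(i + 1) // keep the escaped character literally
      i += 2
    } else if (c == delim) {
      fields += cur.toString(); cur.clear(); i += 1
    } else {
      cur += c; i += 1
    }
  }
  fields += cur.toString()
  fields.toList
}

// "a\,b" is one field containing a comma; "c" and "d" are separate fields.
assert(splitEscaped("a\\,b,c,d") == List("a,b", "c", "d"))
```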
On Thu, Oct 27, 2016 at 2:03 PM, Jain, Nishit