In many cases we use more efficient mutable implementations internally
(e.g. mutable, undecoded UTF-8 bytes instead of java.lang.String, or a
BigDecimal implementation that stores the value in a Long when the number
is small enough).
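To make the BigDecimal point concrete, here is a minimal sketch of the
Long-backed idea (not Spark's actual Decimal class; the name SmallDecimal
and the 18-digit cutoff are just for illustration): keep the unscaled value
in a Long while it fits, and fall back to java.math.BigDecimal when it
doesn't.

  // Illustrative sketch only; Spark's real class is org.apache.spark.sql.types.Decimal.
  final class SmallDecimal private (unscaled: Long, scale: Int, big: java.math.BigDecimal) {
    // Fast path: rebuild from the Long; slow path: return the stored BigDecimal.
    def toBigDecimal: java.math.BigDecimal =
      if (big ne null) big else java.math.BigDecimal.valueOf(unscaled, scale)
    override def toString: String = toBigDecimal.toString
  }

  object SmallDecimal {
    // An unscaled value with at most 18 decimal digits always fits in a Long.
    val MaxLongDigits = 18

    def apply(d: java.math.BigDecimal): SmallDecimal =
      if (d.precision <= MaxLongDigits)
        new SmallDecimal(d.unscaledValue.longValueExact, d.scale, null)
      else
        new SmallDecimal(0L, 0, d)
  }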

On Thu, Jun 25, 2015 at 1:56 PM, Koert Kuipers <ko...@tresata.com> wrote:

> I noticed in DataFrame that to get the rdd out of it some conversions are
> done:
>   val converter = CatalystTypeConverters.createToScalaConverter(schema)
>   rows.map(converter(_).asInstanceOf[Row])
>
> Does this mean DataFrame internally does not use the standard Scala types?
> Why not?
>
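For completeness, a small usage sketch of the effect being asked about
(assuming the Spark 1.4-era SQLContext API; the object name RddTypesCheck is
just for illustration): by the time df.rdd hands you rows, the converter shown
above has already mapped the internal representations back to plain strings
and BigDecimals.

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.SQLContext

  object RddTypesCheck {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("rdd-types").setMaster("local[*]"))
      val sqlContext = new SQLContext(sc)
      import sqlContext.implicits._

      val df = sc.parallelize(Seq(("a", BigDecimal("1.23")))).toDF("s", "d")

      // The Catalyst-to-Scala conversion has already run on these rows, so the
      // field classes are the standard String/BigDecimal ones, not the internal ones.
      df.rdd.collect().foreach { row =>
        println(s"${row.get(0).getClass} / ${row.get(1).getClass}")
      }
      sc.stop()
    }
  }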
