I checked it, and it seems to be a bug. Could you create a JIRA, please?
---Original---
From: "Don Drake"<dondr...@gmail.com>
Date: 2017/2/7 01:26:59
To: "user"<user@spark.apache.org>;
Subject: Re: Spark 2 - Creating datasets from dataframes with extra columns
This seems like a bug to me, the schemas should match.

In 1.6, when you created a Dataset from a DataFrame that had extra columns,
the columns not in the case class were dropped from the Dataset.

For example, in 1.6 the column c4 is gone:

scala> case class F(f1: String, f2: String, f3: String)
defined class F

scala> import sqlContext.implicits._
import sqlContext.implicits._

scala> import org.apache.spark.sql.Encoders
import org.apache.spark.sql.Encoders

scala> val fEncoder = Encoders.product[F]
fEncoder: org.apache.spark.sql.Encoder[F] = class[f1[0]: string, f2[0]: string, f3[0]: string]

scala> fEncoder.schema
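For anyone wanting to reproduce this outside the REPL, here is a minimal sketch of the scenario being discussed, assuming a local Spark 2.x `SparkSession`; the object and column names (`ExtraColumnRepro`, `c4`) are placeholders, not from the original thread:

```scala
import org.apache.spark.sql.SparkSession

// Top-level case class so Encoders.product / the implicit encoder can resolve it.
case class F(f1: String, f2: String, f3: String)

object ExtraColumnRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("extra-column-repro")
      .getOrCreate()
    import spark.implicits._

    // DataFrame with one column (c4) beyond the case class fields.
    val df = Seq(("a", "b", "c", "d")).toDF("f1", "f2", "f3", "c4")

    // In 1.6 the extra column was dropped when converting to a Dataset;
    // in 2.x the Dataset's schema can still report c4, which is the
    // mismatch with the encoder's schema discussed in this thread.
    val ds = df.as[F]
    ds.printSchema()

    spark.stop()
  }
}
```

This only illustrates the setup; whether `c4` survives in `ds.schema` is exactly the behavior under question.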