Looks like the import comes from
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoop.scala:

      processLine("import sqlContext.sql")
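
FWIW, a bare sql(...) stays ambiguous once both imports are in scope,
but qualifying the call resolves it. A minimal sketch (assuming the
shell-provided sqlContext and reusing the table name from below):

      // Both imports put a `sql` name in scope:
      //   org.apache.spark._  -> the org.apache.spark.sql package
      //   sqlContext.sql      -> the sql(String) method on SQLContext
      import org.apache.spark._
      import sqlContext.sql

      sql("SELECT * FROM dafa")                        // ambiguous, won't compile
      sqlContext.sql("SELECT * FROM dafa").show(false) // qualified call resolves fine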

On Mon, Apr 4, 2016 at 5:16 PM, Jacek Laskowski <ja...@japila.pl> wrote:

> Hi Spark devs,
>
> I'm unsure if what I'm seeing is correct. I'd appreciate any input
> to...rest my nerves :-) I did `import org.apache.spark._` by mistake,
> but since it's valid, I'm wondering why the Spark shell imports sql
> at all, since it's already available after that import?!
>
> (it's today's build)
>
> scala> sql("SELECT * FROM dafa").show(false)
> <console>:30: error: reference to sql is ambiguous;
> it is imported twice in the same scope by
> import org.apache.spark._
> and import sqlContext.sql
>        sql("SELECT * FROM dafa").show(false)
>        ^
>
> scala> :imports
>  1) import sqlContext.implicits._  (52 terms, 31 are implicit)
>  2) import sqlContext.sql          (1 terms)
>
> scala> sc.version
> res19: String = 2.0.0-SNAPSHOT
>
> Pozdrawiam,
> Jacek Laskowski
> ----
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark http://bit.ly/mastering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
> For additional commands, e-mail: dev-h...@spark.apache.org
>
>
