Update:

This syntax is mainly to avoid retyping column names.

Take the example from my previous post: *a* is a table with 15 columns and
*b* has 5. After a join, I get a table of (15 + 5 - 1 (the key in *b*)) = 19
columns, which I register in sqlContext.

I don't want to retype all 19 column names when querying with select. This
feature exists in Hive, but in SparkSQL it throws an exception.
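For reference, the syntax in question is the qualified star (`a.*`) in the select list after a join. Here is a minimal sketch of the behavior I expect, using Python's sqlite3 rather than Spark (the table and column names are made up for illustration); this is the standard SQL form that Hive accepts:

```python
import sqlite3

# In-memory database standing in for the registered tables.
# Hypothetical schemas: "a" has several columns, "b" shares the key "k".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (k INTEGER, x INTEGER, y INTEGER)")
conn.execute("CREATE TABLE b (k INTEGER, z INTEGER)")
conn.execute("INSERT INTO a VALUES (1, 10, 20)")
conn.execute("INSERT INTO b VALUES (1, 30)")

# "a.*" expands to all of a's columns without retyping them;
# only b's non-key column is listed explicitly.
rows = conn.execute(
    "SELECT a.*, b.z FROM a JOIN b ON a.k = b.k"
).fetchall()
print(rows)  # [(1, 10, 20, 30)]
```

It is this `SELECT a.*, b.z` form that works in Hive but fails for me in SparkSql.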

Any ideas ? Thx

Hao



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/SparkSQL-select-syntax-tp16299p16364.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
