[jira] [Created] (SPARK-30618) Why does SparkSQL allow `WHERE` to be table alias?

2020-01-23 Thread Chunjun Xiao (Jira)
Chunjun Xiao created SPARK-30618:


 Summary: Why does SparkSQL allow `WHERE` to be table alias?
 Key: SPARK-30618
 URL: https://issues.apache.org/jira/browse/SPARK-30618
 Project: Spark
  Issue Type: Question
  Components: SQL
Affects Versions: 2.4.4
Reporter: Chunjun Xiao


A `WHERE` keyword with no predicate is valid in Spark SQL, as in `SELECT * FROM XXX 
WHERE`: the trailing `WHERE` is parsed as a table alias rather than the start of a filter clause.
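
For concreteness, a minimal reproduction (a sketch, assuming spark-shell on Spark 2.4.4; `XXX` is just a placeholder temp view):

{code:scala}
// Sketch: the trailing WHERE is accepted and treated as a table alias,
// i.e. the query is equivalent to SELECT * FROM XXX AS WHERE.
spark.range(3).createOrReplaceTempView("XXX")
spark.sql("SELECT * FROM XXX WHERE").show()  // parses and runs; no filter is applied
{code}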

I think this surprises most SQL users, since the same statement is invalid in other 
SQL engines such as MySQL.

I checked the source code and found that many more keywords (reserved in most SQL 
systems) are treated as `nonReserved` and are allowed to be table aliases.

Could anyone please explain the rationale behind this decision?





[jira] [Created] (SPARK-3474) Rename the env variable SPARK_MASTER_IP to SPARK_MASTER_HOST

2014-09-10 Thread Chunjun Xiao (JIRA)
Chunjun Xiao created SPARK-3474:
---

 Summary: Rename the env variable SPARK_MASTER_IP to SPARK_MASTER_HOST
 Key: SPARK-3474
 URL: https://issues.apache.org/jira/browse/SPARK-3474
 Project: Spark
  Issue Type: Bug
  Components: Deploy
Affects Versions: 1.0.1
Reporter: Chunjun Xiao


There's an inconsistency in the env variable used to specify the Spark master host.

In the Spark source code (MasterArguments.scala), the env variable is 
SPARK_MASTER_HOST, while in the shell scripts (e.g., spark-env.sh, 
start-master.sh) it's named SPARK_MASTER_IP.

This introduces an issue in some cases, e.g., when the Spark master is started 
via `service spark-master start`, which is built on the latest Bigtop (refer 
to bigtop/spark-master.svc).
In that case, SPARK_MASTER_IP has no effect.
I suggest we change SPARK_MASTER_IP in the shell scripts to SPARK_MASTER_HOST.
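
To illustrate the failure mode, a minimal sketch (my paraphrase of the host resolution described above, not the verbatim 1.0.1 source):

{code:scala}
// Paraphrased sketch of MasterArguments host resolution: the daemon consults
// SPARK_MASTER_HOST only, so a value exported solely as SPARK_MASTER_IP
// (as Bigtop's service wrapper does, per this report) is silently ignored.
var host = java.net.InetAddress.getLocalHost.getHostName
sys.env.get("SPARK_MASTER_HOST").foreach(h => host = h)
// Note: sys.env.get("SPARK_MASTER_IP") is never consulted here.
{code}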







[jira] [Created] (SPARK-2706) Enable Spark to support Hive 0.13

2014-07-27 Thread Chunjun Xiao (JIRA)
Chunjun Xiao created SPARK-2706:
---

 Summary: Enable Spark to support Hive 0.13
 Key: SPARK-2706
 URL: https://issues.apache.org/jira/browse/SPARK-2706
 Project: Spark
  Issue Type: Dependency upgrade
  Components: SQL
Affects Versions: 1.0.1
Reporter: Chunjun Xiao


It seems Spark does not work well with Hive 0.13.
When I compiled Spark against Hive 0.13.1, I got the error messages attached 
below.
So, when will Spark be able to support Hive 0.13?

Compile errors:
{quote}
[ERROR] /ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala:180: type mismatch;
 found   : String
 required: Array[String]
[ERROR]   val proc: CommandProcessor = CommandProcessorFactory.get(tokens(0), hiveconf)
[ERROR]  ^
[ERROR] /ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala:264: overloaded method constructor TableDesc with alternatives:
  (x$1: Class[_ <: org.apache.hadoop.mapred.InputFormat[_, _]],x$2: Class[_],x$3: java.util.Properties)org.apache.hadoop.hive.ql.plan.TableDesc <and>
  ()org.apache.hadoop.hive.ql.plan.TableDesc
 cannot be applied to (Class[org.apache.hadoop.hive.serde2.Deserializer], Class[(some other)?0(in value tableDesc)(in value tableDesc)], Class[?0(in value tableDesc)(in value tableDesc)], java.util.Properties)
[ERROR]   val tableDesc = new TableDesc(
[ERROR]   ^
[ERROR] /ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala:140: value getPartitionPath is not a member of org.apache.hadoop.hive.ql.metadata.Partition
[ERROR]   val partPath = partition.getPartitionPath
[ERROR]^
[ERROR] /ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveTableScan.scala:132: value appendReadColumnNames is not a member of object org.apache.hadoop.hive.serde2.ColumnProjectionUtils
[ERROR] ColumnProjectionUtils.appendReadColumnNames(hiveConf, attributes.map(_.name))
[ERROR]   ^
[ERROR] /ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala:79: org.apache.hadoop.hive.common.type.HiveDecimal does not have a constructor
[ERROR]   new HiveDecimal(bd.underlying())
[ERROR]   ^
[ERROR] /ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala:132: type mismatch;
 found   : org.apache.hadoop.fs.Path
 required: String
[ERROR]   SparkHiveHadoopWriter.createPathFromString(fileSinkConf.getDirName, conf))
[ERROR]   ^
[ERROR] /ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala:179: value getExternalTmpFileURI is not a member of org.apache.hadoop.hive.ql.Context
[ERROR] val tmpLocation = hiveContext.getExternalTmpFileURI(tableLocation)
[ERROR]   ^
[ERROR] /ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUdfs.scala:209: org.apache.hadoop.hive.common.type.HiveDecimal does not have a constructor
[ERROR]   case bd: BigDecimal => new HiveDecimal(bd.underlying())
[ERROR]  ^
[ERROR] 8 errors found
[DEBUG] Compilation failed (CompilerInterface)
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Spark Project Parent POM .. SUCCESS [2.579s]
[INFO] Spark Project Core  SUCCESS [2:39.805s]
[INFO] Spark Project Bagel ... SUCCESS [21.148s]
[INFO] Spark Project GraphX .. SUCCESS [59.950s]
[INFO] Spark Project ML Library .. SUCCESS [1:08.771s]
[INFO] Spark Project Streaming ... SUCCESS [1:17.759s]
[INFO] Spark Project Tools ... SUCCESS [15.405s]
[INFO] Spark Project Catalyst  SUCCESS [1:17.405s]
[INFO] Spark Project SQL . SUCCESS [1:11.094s]
[INFO] Spark Project Hive  FAILURE [11.121s]
[INFO] Spark Project REPL  SKIPPED
[INFO] Spark Project YARN Parent POM . SKIPPED
[INFO] Spark Project YARN Stable API . SKIPPED
[INFO] Spark Project Assembly  SKIPPED
[INFO] Spark Project External Twitter  SKIPPED
[INFO] Spark Project External Kafka .. SKIPPED
[INFO] Spark Project External Flume .. SKIPPED
[INFO] Spark Project External ZeroMQ . SKIPPED
[INFO] Spark Project External MQTT ... SKIPPED
[INFO] Spark Project Examples  SKIPPED
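
For reference, the first and last errors hint at the shape of the needed changes. A hedged sketch of the Hive 0.13-compatible call sites (my assumption based on the Hive 0.13 API, not an actual Spark patch):

{code:scala}
// Sketch only: in Hive 0.13 HiveDecimal no longer has a public constructor;
// the static factory is used instead.
import org.apache.hadoop.hive.common.`type`.HiveDecimal

def toHiveDecimal(bd: BigDecimal): HiveDecimal =
  HiveDecimal.create(bd.underlying())  // replaces: new HiveDecimal(bd.underlying())

// Likewise, per the first error above, CommandProcessorFactory.get now expects
// an Array[String], e.g. CommandProcessorFactory.get(Array(tokens(0)), hiveconf).
{code}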