[jira] [Updated] (SPARK-4959) Attributes are case sensitive when using a select query from a projection

2015-01-21 Thread Michael Armbrust (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Armbrust updated SPARK-4959:

Fix Version/s: 1.2.1

 Attributes are case sensitive when using a select query from a projection
 -

 Key: SPARK-4959
 URL: https://issues.apache.org/jira/browse/SPARK-4959
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.0
Reporter: Andy Konwinski
Assignee: Cheng Hao
Priority: Blocker
  Labels: backport-needed
 Fix For: 1.3.0, 1.2.1


 Per [~marmbrus], see this line of code, where we should be using an attribute map:
 https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala#L147
 To reproduce, I ran the following in the Spark shell:
 {code}
 import sqlContext._
 sql("drop table if exists test")
 sql("create table test (col1 string)")
 sql("insert into table test select hi from prejoined limit 1")
 val projection = "col1".attr.as(Symbol("CaseSensitiveColName")) ::
   "col1".attr.as(Symbol("CaseSensitiveColName2")) :: Nil
 sqlContext.table("test").select(projection: _*).registerTempTable("test2")
 // This succeeds.
 sql("select CaseSensitiveColName from test2").first()
 // This fails with java.util.NoSuchElementException: key not found: casesensitivecolname#23046
 sql("select casesensitivecolname from test2").first()
 {code}
 The full stack trace printed by the final, failing command:
 {code}
 java.util.NoSuchElementException: key not found: casesensitivecolname#23046
   at scala.collection.MapLike$class.default(MapLike.scala:228)
   at org.apache.spark.sql.catalyst.expressions.AttributeMap.default(AttributeMap.scala:29)
   at scala.collection.MapLike$class.apply(MapLike.scala:141)
   at org.apache.spark.sql.catalyst.expressions.AttributeMap.apply(AttributeMap.scala:29)
   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
   at org.apache.spark.sql.hive.execution.HiveTableScan.<init>(HiveTableScan.scala:57)
   at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:221)
   at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:221)
   at org.apache.spark.sql.SQLContext$SparkPlanner.pruneFilterProject(SQLContext.scala:378)
   at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$.apply(HiveStrategies.scala:217)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
   at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
   at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:285)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
   at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
   at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:418)
   at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:416)
   at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:422)
   at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:422)
   at org.apache.spark.sql.SchemaRDD.collect(SchemaRDD.scala:444)
   at org.apache.spark.sql.SchemaRDD.take(SchemaRDD.scala:446)
   at org.apache.spark.sql.SchemaRDD.take(SchemaRDD.scala:108)
   at org.apache.spark.rdd.RDD.first(RDD.scala:1093)
 {code}
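The failure above can be reduced to a toy model. The sketch below is hypothetical: `Attr` and `CaseSensitivityDemo` are invented names standing in for Catalyst's attribute classes, not Spark's actual `AttributeMap`. It shows why a lookup keyed by attribute name breaks once the analyzer lowercases a reference, while a lookup keyed by a stable expression id (the attribute-map approach suggested above) survives the re-casing.

```scala
// Hypothetical, simplified model of the bug; not Spark's real classes.
final case class Attr(name: String, exprId: Int)

object CaseSensitivityDemo extends App {
  val attr = Attr("CaseSensitiveColName", 23046)

  // Keying by name: once the analyzer hands back a lowercased reference,
  // the lookup misses -- the NoSuchElementException in the report.
  val byName: Map[String, Attr] = Map(attr.name -> attr)
  assert(byName.get("casesensitivecolname").isEmpty)

  // Keying by a stable expression id: re-casing the name leaves the id
  // intact, so the attribute still resolves.
  val byId: Map[Int, Attr] = Map(attr.exprId -> attr)
  val lowered = attr.copy(name = attr.name.toLowerCase)
  assert(byId(lowered.exprId) == attr)
}
```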



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-4959) Attributes are case sensitive when using a select query from a projection

2015-01-20 Thread Yin Huai (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yin Huai updated SPARK-4959:

Labels: backport-needed  (was: )

 Attributes are case sensitive when using a select query from a projection
 -

 Key: SPARK-4959
 URL: https://issues.apache.org/jira/browse/SPARK-4959
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.0
Reporter: Andy Konwinski
Priority: Critical
  Labels: backport-needed




[jira] [Updated] (SPARK-4959) Attributes are case sensitive when using a select query from a projection

2015-01-20 Thread Patrick Wendell (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell updated SPARK-4959:
---
Priority: Blocker  (was: Critical)

 Attributes are case sensitive when using a select query from a projection
 -

 Key: SPARK-4959
 URL: https://issues.apache.org/jira/browse/SPARK-4959
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.0
Reporter: Andy Konwinski
Priority: Blocker
  Labels: backport-needed




[jira] [Updated] (SPARK-4959) Attributes are case sensitive when using a select query from a projection

2015-01-20 Thread Patrick Wendell (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell updated SPARK-4959:
---
Assignee: Cheng Hao

 Attributes are case sensitive when using a select query from a projection
 -

 Key: SPARK-4959
 URL: https://issues.apache.org/jira/browse/SPARK-4959
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.0
Reporter: Andy Konwinski
Assignee: Cheng Hao
Priority: Blocker
  Labels: backport-needed
 Fix For: 1.3.0





[jira] [Updated] (SPARK-4959) Attributes are case sensitive when using a select query from a projection

2015-01-20 Thread Patrick Wendell (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell updated SPARK-4959:
---
Fix Version/s: 1.3.0

 Attributes are case sensitive when using a select query from a projection
 -

 Key: SPARK-4959
 URL: https://issues.apache.org/jira/browse/SPARK-4959
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.0
Reporter: Andy Konwinski
Priority: Blocker
  Labels: backport-needed
 Fix For: 1.3.0





[jira] [Updated] (SPARK-4959) Attributes are case sensitive when using a select query from a projection

2015-01-20 Thread Patrick Wendell (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell updated SPARK-4959:
---
Fix Version/s: (was: 1.2.1)

 Attributes are case sensitive when using a select query from a projection
 -

 Key: SPARK-4959
 URL: https://issues.apache.org/jira/browse/SPARK-4959
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.0
Reporter: Andy Konwinski
Assignee: Cheng Hao
Priority: Blocker
  Labels: backport-needed
 Fix For: 1.3.0





[jira] [Updated] (SPARK-4959) Attributes are case sensitive when using a select query from a projection

2015-01-20 Thread Patrick Wendell (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell updated SPARK-4959:
---
Fix Version/s: 1.2.1

 Attributes are case sensitive when using a select query from a projection
 -

 Key: SPARK-4959
 URL: https://issues.apache.org/jira/browse/SPARK-4959
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.0
Reporter: Andy Konwinski
Assignee: Cheng Hao
Priority: Blocker
  Labels: backport-needed
 Fix For: 1.3.0, 1.2.1


 Per [~marmbrus], see this line of code, where we should be using an attribute 
 map
  
 https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala#L147
 To reproduce, i ran the following in the Spark shell:
 {code}
 import sqlContext._
 sql(drop table if exists test)
 sql(create table test (col1 string))
 sql(insert into table test select hi from prejoined limit 1)
 val projection = col1.attr.as(Symbol(CaseSensitiveColName)) :: 
 col1.attr.as(Symbol(CaseSensitiveColName2)) :: Nil
 sqlContext.table(test).select(projection:_*).registerTempTable(test2)
 # This succeeds.
 sql(select CaseSensitiveColName from test2).first()
 # This fails with java.util.NoSuchElementException: key not found: 
 casesensitivecolname#23046
 sql(select casesensitivecolname from test2).first()
 {code}
 The full stack trace printed for the final command that is failing: 
 {code}
 java.util.NoSuchElementException: key not found: casesensitivecolname#23046
   at scala.collection.MapLike$class.default(MapLike.scala:228)
   at 
 org.apache.spark.sql.catalyst.expressions.AttributeMap.default(AttributeMap.scala:29)
   at scala.collection.MapLike$class.apply(MapLike.scala:141)
   at 
org.apache.spark.sql.catalyst.expressions.AttributeMap.apply(AttributeMap.scala:29)
   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
   at org.apache.spark.sql.hive.execution.HiveTableScan.<init>(HiveTableScan.scala:57)
   at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:221)
   at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:221)
   at org.apache.spark.sql.SQLContext$SparkPlanner.pruneFilterProject(SQLContext.scala:378)
   at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$.apply(HiveStrategies.scala:217)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
   at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
   at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:285)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
   at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
   at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
   at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:418)
   at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:416)
   at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:422)
   at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:422)
   at org.apache.spark.sql.SchemaRDD.collect(SchemaRDD.scala:444)
   at org.apache.spark.sql.SchemaRDD.take(SchemaRDD.scala:446)
   at org.apache.spark.sql.SchemaRDD.take(SchemaRDD.scala:108)
   at org.apache.spark.rdd.RDD.first(RDD.scala:1093)
 {code}
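The exception comes from an `apply` on a map whose key (`casesensitivecolname#23046`) was lowercased before the lookup. A minimal, hypothetical model of the failure and of the suggested remedy (the `Attr` class, ids, and helper names below are illustrative only, not Spark's actual `AttributeMap`):

```scala
// Hypothetical minimal model: an attribute carries a user-visible name and
// a globally unique exprId (as catalyst attributes do).
case class Attr(name: String, exprId: Long)

// Keying the map by name reproduces the bug: a reference the parser
// lowercased ("casesensitivecolname") misses the entry that was stored as
// "CaseSensitiveColName" -- the NoSuchElementException in the trace above.
def lookupByName(m: Map[String, Attr], ref: String): Option[Attr] =
  m.get(ref)

// Keying by exprId instead makes the spelling of the name irrelevant to
// resolution, which is what "use an attribute map" asks for.
def lookupById(m: Map[Long, Attr], ref: Attr): Option[Attr] =
  m.get(ref.exprId)
```

For example, with `val a = Attr("CaseSensitiveColName", 23046L)`, `lookupByName(Map(a.name -> a), "casesensitivecolname")` is `None`, while `lookupById(Map(a.exprId -> a), a.copy(name = "casesensitivecolname"))` still finds `a`.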



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-4959) Attributes are case sensitive when using a select query from a projection

2015-01-20 Thread Patrick Wendell (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Wendell updated SPARK-4959:
---
Target Version/s: 1.3.0, 1.2.1  (was: 1.3.0)

 Attributes are case sensitive when using a select query from a projection
 -

 Key: SPARK-4959
 URL: https://issues.apache.org/jira/browse/SPARK-4959
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.0
Reporter: Andy Konwinski
Priority: Blocker
  Labels: backport-needed
 Fix For: 1.3.0








[jira] [Updated] (SPARK-4959) Attributes are case sensitive when using a select query from a projection

2014-12-29 Thread Michael Armbrust (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Armbrust updated SPARK-4959:

Priority: Critical  (was: Major)
Target Version/s: 1.3.0

 Attributes are case sensitive when using a select query from a projection
 -

 Key: SPARK-4959
 URL: https://issues.apache.org/jira/browse/SPARK-4959
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.0
Reporter: Andy Konwinski
Priority: Critical







[jira] [Updated] (SPARK-4959) Attributes are case sensitive when using a select query from a projection

2014-12-24 Thread Andy Konwinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Konwinski updated SPARK-4959:
--
Description: 
Per [~marmbrus], see this line of code, where we should be using an attribute map:
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala#L147

To reproduce, I ran the following in the Spark shell:

{code}
sql("drop table if exists test")
sql("create table test (col1 string)")
sql("insert into table test select 'hi' from prejoined limit 1")
import sqlContext._
val projection = "col1".attr.as(Symbol("CaseSensitiveColName")) ::
  "col1".attr.as(Symbol("CaseSensitiveColName2")) :: Nil
sqlContext.table("test").select(projection: _*).registerTempTable("test2")

// This succeeds.
sql("select CaseSensitiveColName from test2").first()

// This fails with java.util.NoSuchElementException: key not found: casesensitivecolname#23046
sql("select casesensitivecolname from test2").first()
{code}

The full stack trace printed for the final command that is failing: 
{code}
java.util.NoSuchElementException: key not found: casesensitivecolname#23046
  at scala.collection.MapLike$class.default(MapLike.scala:228)
  at org.apache.spark.sql.catalyst.expressions.AttributeMap.default(AttributeMap.scala:29)
  at scala.collection.MapLike$class.apply(MapLike.scala:141)
  at org.apache.spark.sql.catalyst.expressions.AttributeMap.apply(AttributeMap.scala:29)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
  at scala.collection.AbstractTraversable.map(Traversable.scala:105)
  at org.apache.spark.sql.hive.execution.HiveTableScan.<init>(HiveTableScan.scala:57)
  at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:221)
  at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:221)
  at org.apache.spark.sql.SQLContext$SparkPlanner.pruneFilterProject(SQLContext.scala:378)
  at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$.apply(HiveStrategies.scala:217)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
  at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:285)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
  at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:418)
  at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:416)
  at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:422)
  at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:422)
  at org.apache.spark.sql.SchemaRDD.collect(SchemaRDD.scala:444)
  at org.apache.spark.sql.SchemaRDD.take(SchemaRDD.scala:446)
  at org.apache.spark.sql.SchemaRDD.take(SchemaRDD.scala:108)
  at org.apache.spark.rdd.RDD.first(RDD.scala:1093)
{code}
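A hedged sketch of the "attribute map" the description asks for (class and method names below are illustrative; Spark's real `AttributeMap` lives in `org.apache.spark.sql.catalyst.expressions`): keep the user-visible name for display, but resolve entries through the stable exprId so casing cannot break lookups.

```scala
// Illustrative attribute with a stable exprId (not Spark's actual class).
case class AttrRef(name: String, exprId: Long)

// Sketch of an exprId-keyed map in the spirit of catalyst's AttributeMap:
// lookups succeed however the referencing query cased the name.
final class AttrRefMap[A](entries: Seq[(AttrRef, A)]) {
  private val byId: Map[Long, A] =
    entries.map { case (k, v) => k.exprId -> v }.toMap
  def apply(ref: AttrRef): A = byId(ref.exprId)
  def get(ref: AttrRef): Option[A] = byId.get(ref.exprId)
  def contains(ref: AttrRef): Boolean = byId.contains(ref.exprId)
}
```

With this structure, the lowercased reference from `select casesensitivecolname from test2` would still resolve, because it carries the same exprId as the projected `CaseSensitiveColName` attribute.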


[jira] [Updated] (SPARK-4959) Attributes are case sensitive when using a select query from a projection

2014-12-24 Thread Andy Konwinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Konwinski updated SPARK-4959:
--