[jira] [Created] (SPARK-33420) BroadcastJoin failure when a join key has a cast from DateType to String

2020-11-11 Thread qinyu (Jira)
qinyu created SPARK-33420:
-

 Summary: BroadcastJoin failure when a join key has a cast from DateType to String
 Key: SPARK-33420
 URL: https://issues.apache.org/jira/browse/SPARK-33420
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.0.1
 Environment: Spark 3.0.1, Hadoop 2.9.2
Reporter: qinyu


When using Spark as below:

spark.sql(
  """create table table1(a1 INT, a2 STRING)
    |using parquet
    |""".stripMargin).show()

spark.sql(
  """create table table2(b1 INT, b2 STRING)
    |using parquet
    |""".stripMargin).show()

spark.sql(
  """select /*+ BROADCAST(a) */ * from table1 a join table2 b
    |on cast(to_date(a.a2) as string) = b.b2
    |""".stripMargin).show()

The following exception is thrown:

java.util.NoSuchElementException: None.get
    at scala.None$.get(Option.scala:529)
    at scala.None$.get(Option.scala:527)
    at org.apache.spark.sql.catalyst.expressions.TimeZoneAwareExpression.zoneId(datetimeExpressions.scala:56)
    at org.apache.spark.sql.catalyst.expressions.TimeZoneAwareExpression.zoneId$(datetimeExpressions.scala:56)
    at org.apache.spark.sql.catalyst.expressions.CastBase.zoneId$lzycompute(Cast.scala:253)
    at org.apache.spark.sql.catalyst.expressions.CastBase.zoneId(Cast.scala:253)
    at org.apache.spark.sql.catalyst.expressions.CastBase.dateFormatter$lzycompute(Cast.scala:287)
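
For reference, a self-contained Scala sketch of the reproduction above, assuming a local session; the object name, app name, and local[*] master are illustrative additions and not part of the original report:

import org.apache.spark.sql.SparkSession

object Spark33420Repro {
  def main(args: Array[String]): Unit = {
    // Local session only for reproduction purposes.
    val spark = SparkSession.builder()
      .appName("SPARK-33420 repro")
      .master("local[*]")
      .getOrCreate()

    spark.sql("create table table1(a1 INT, a2 STRING) using parquet")
    spark.sql("create table table2(b1 INT, b2 STRING) using parquet")

    // Broadcast join whose key casts a DateType back to String;
    // per the report, on 3.0.1 this fails with
    // java.util.NoSuchElementException: None.get.
    spark.sql(
      """select /*+ BROADCAST(a) */ * from table1 a join table2 b
        |on cast(to_date(a.a2) as string) = b.b2
        |""".stripMargin).show()

    spark.stop()
  }
}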

 






[jira] [Created] (SPARK-33192) Dynamic resource allocation fails on K8s

2020-10-20 Thread qinyu (Jira)
qinyu created SPARK-33192:
-

 Summary: Dynamic resource allocation fails on K8s
 Key: SPARK-33192
 URL: https://issues.apache.org/jira/browse/SPARK-33192
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 3.0.1
Reporter: qinyu


When Spark runs with the following parameters:
spark.dynamicAllocation.enabled=true
spark.shuffle.service.enabled=false
spark.dynamicAllocation.shuffleTracking.enabled=true
spark.dynamicAllocation.minExecutors=0
spark.dynamicAllocation.maxExecutors=3
spark.dynamicAllocation.executorIdleTimeout=60
i.e. dynamic resource allocation without the external shuffle service, and a
job with a shuffle operation is submitted, the executors are never removed,
even after the idle timeout has passed.
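
A minimal Scala driver sketch of this scenario, assuming the configurations above; the object name, the range/groupBy job, and the sleep duration are illustrative, and the K8s-specific submit settings (k8s:// master URL, container image, service account) are assumed to be supplied via spark-submit and are not shown:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object Spark33192Repro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SPARK-33192 repro")
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.shuffle.service.enabled", "false")
      .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
      .config("spark.dynamicAllocation.minExecutors", "0")
      .config("spark.dynamicAllocation.maxExecutors", "3")
      .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
      .getOrCreate()

    // A job with a shuffle: groupBy forces an exchange, so executors
    // end up holding shuffle data while shuffle tracking is enabled.
    spark.range(0, 1000000)
      .toDF("n")
      .groupBy(col("n") % 10)
      .count()
      .show()

    // Stay idle well past executorIdleTimeout; per the report, the
    // executors are never removed during this window.
    Thread.sleep(5 * 60 * 1000)

    spark.stop()
  }
}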


