[ https://issues.apache.org/jira/browse/SPARK-35631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chetan Bhat closed SPARK-35631.
-------------------------------

This is not an issue: with spark.sql.ansi.enabled=true, Spark is required to raise an error on integer overflow instead of returning a wrapped result, so the exception is the intended ANSI-mode behavior. Closed without handling.
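
For reference, a minimal sketch of the two usual ways to avoid the error
(the exact results below assume a Spark 3.1.1 beeline session with the
defaults otherwise unchanged):

SET spark.sql.ansi.enabled=true;
SELECT CAST(2147483647 AS BIGINT) + 1;   -- widens the addition to BIGINT; returns 2147483648

SET spark.sql.ansi.enabled=false;
SELECT 2147483647 + 1;                   -- default (non-ANSI) semantics; wraps to -2147483648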

> java.lang.ArithmeticException: integer overflow when SELECT 2147483647 + 1 
> executed with set spark.sql.ansi.enabled=true
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-35631
>                 URL: https://issues.apache.org/jira/browse/SPARK-35631
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.1.1
>         Environment: Spark 3.1.1
>            Reporter: Chetan Bhat
>            Priority: Minor
>
> From Spark beeline, the following queries are executed:
> set spark.sql.ansi.enabled=true;
> SELECT 2147483647 + 1;
>
> Issue: the SELECT query fails with java.lang.ArithmeticException: integer
> overflow.
> 0: jdbc:hive2://10.20.253.239:23040/default> set spark.sql.ansi.enabled=true;
> +-------------------------+--------+
> |           key           | value  |
> +-------------------------+--------+
> | spark.sql.ansi.enabled  | true   |
> +-------------------------+--------+
> 1 row selected (0.052 seconds)
> 0: jdbc:hive2://10.20.253.239:23040/default> SELECT 2147483647 + 1;
> Error: org.apache.hive.service.cli.HiveSQLException: Error running query: java.lang.ArithmeticException: integer overflow
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:361)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:263)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3$$Lambda$1762/209207680.apply$mcV$sp(Unknown Source)
>  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>  at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:78)
>  at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:62)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:263)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:258)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:272)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArithmeticException: integer overflow
>  at java.lang.Math.addExact(Math.java:790)
>  at org.apache.spark.sql.types.IntegerExactNumeric$.plus(numerics.scala:95)
>  at org.apache.spark.sql.types.IntegerExactNumeric$.plus(numerics.scala:94)
>  at org.apache.spark.sql.catalyst.expressions.Add.nullSafeEval(arithmetic.scala:264)
>  at org.apache.spark.sql.catalyst.expressions.BinaryExpression.eval(Expression.scala:567)
>  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1$$anonfun$applyOrElse$1.applyOrElse(expressions.scala:66)
>  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1$$anonfun$applyOrElse$1.applyOrElse(expressions.scala:54)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:317)
>  at org.apache.spark.sql.catalyst.trees.TreeNode$$Lambda$1613/79619382.apply(Unknown Source)
>  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:73)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:317)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:322)
>  at org.apache.spark.sql.catalyst.trees.TreeNode$$Lambda$1615/1159662764.apply(Unknown Source)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:407)
>  at org.apache.spark.sql.catalyst.trees.TreeNode$$Lambda$1601/550689618.apply(Unknown Source)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:243)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:405)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:358)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:322)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsDown$1(QueryPlan.scala:94)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan$$Lambda$1315/1031179320.apply(Unknown Source)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:116)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan$$Lambda$1788/344990949.apply(Unknown Source)
>  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:73)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:116)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:127)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$3(QueryPlan.scala:132)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan$$Lambda$1317/222329442.apply(Unknown Source)
>  at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
>  at scala.collection.TraversableLike$$Lambda$26/999522307.apply(Unknown Source)
>  at scala.collection.immutable.List.foreach(List.scala:392)
>  at scala.collection.TraversableLike.map(TraversableLike.scala:238)
>  at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
>  at scala.collection.immutable.List.map(List.scala:298)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:132)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$4(QueryPlan.scala:137)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan$$Lambda$1316/732259142.apply(Unknown Source)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:243)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:137)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:94)
>  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1.applyOrElse(expressions.scala:54)
>  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1.applyOrElse(expressions.scala:53)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:317)
>  at org.apache.spark.sql.catalyst.trees.TreeNode$$Lambda$1613/79619382.apply(Unknown Source)
>  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:73)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:317)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown(AnalysisHelper.scala:171)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown$(AnalysisHelper.scala:169)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:306)
>  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$.apply(expressions.scala:53)
>  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$.apply(expressions.scala:44)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:216)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$Lambda$1311/1478530425.apply(Unknown Source)
>  at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
>  at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
>  at scala.collection.immutable.List.foldLeft(List.scala:89)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:213)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:205)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$Lambda$1310/802758091.apply(Unknown Source)
>  at scala.collection.immutable.List.foreach(List.scala:392)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:205)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:183)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$Lambda$1303/1329861157.apply(Unknown Source)
>  at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:183)
>  at org.apache.spark.sql.execution.QueryExecution.$anonfun$optimizedPlan$1(QueryExecution.scala:87)
>  at org.apache.spark.sql.execution.QueryExecution$$Lambda$1636/2090796568.apply(Unknown Source)
>  at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
>  at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143)
>  at



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
