[jira] [Created] (SPARK-35877) Spark Protobuf jar has CVE issue CVE-2015-5237

2021-06-24 Thread jobit mathew (Jira)
jobit mathew created SPARK-35877:


 Summary: Spark Protobuf jar has CVE issue CVE-2015-5237
 Key: SPARK-35877
 URL: https://issues.apache.org/jira/browse/SPARK-35877
 Project: Spark
  Issue Type: Bug
  Components: Security, Spark Core
Affects Versions: 3.1.1, 2.4.5
Reporter: jobit mathew


The Protobuf jar bundled with Spark is affected by CVE-2015-5237.
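A quick, hedged way to check which Protobuf version a Spark distribution actually ships (the `SPARK_HOME` path is illustrative; the affected-version range comes from the public CVE advisory, not from this report):

```shell
# List the protobuf jar bundled with an unpacked Spark distribution.
# $SPARK_HOME is assumed to point at the Spark install directory.
ls "$SPARK_HOME/jars" | grep -i protobuf
# Per the CVE-2015-5237 advisory, protobuf-java versions before 3.4.0 are
# reportedly affected; compare the version printed above against that range.
```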



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-35821) Spark 3.1.1 Internet Explorer 11 compatibility issues

2021-06-21 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-35821:
-
Description: 
Spark UI-Executor tab is empty in IE11

Spark UI-Stages DAG visualization is empty in IE11

Other tabs look OK.

The Spark job history shows the completed and incomplete application lists, but 
the same issue may appear when opening each application.

Attaching some screenshots

  was:
Spark UI-Executor tab is empty in IE11

Spark UI-Stages DAG visualization is empty in IE11

other tabs looks Ok

Attaching some screenshots


> Spark 3.1.1 Internet Explorer 11 compatibility issues
> -
>
> Key: SPARK-35821
> URL: https://issues.apache.org/jira/browse/SPARK-35821
> Project: Spark
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 3.1.1
>Reporter: jobit mathew
>Priority: Minor
> Attachments: Executortab_Chrome.png, Executortab_IE.PNG, dag_IE.PNG, 
> dag_chrome.png
>
>
> Spark UI-Executor tab is empty in IE11
> Spark UI-Stages DAG visualization is empty in IE11
> Other tabs look OK.
> The Spark job history shows the completed and incomplete application lists, 
> but the same issue may appear when opening each application.
> Attaching some screenshots






[jira] [Commented] (SPARK-35821) Spark 3.1.1 Internet Explorer 11 compatibility issues

2021-06-21 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17366531#comment-17366531
 ] 

jobit mathew commented on SPARK-35821:
--

[~hyukjin.kwon] I attached some screenshots. Could you please have a look?

> Spark 3.1.1 Internet Explorer 11 compatibility issues
> -
>
> Key: SPARK-35821
> URL: https://issues.apache.org/jira/browse/SPARK-35821
> Project: Spark
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 3.1.1
>Reporter: jobit mathew
>Priority: Minor
> Attachments: Executortab_Chrome.png, Executortab_IE.PNG, dag_IE.PNG, 
> dag_chrome.png
>
>
> Spark UI-Executor tab is empty in IE11
> Spark UI-Stages DAG visualization is empty in IE11
> Other tabs look OK.
> Attaching some screenshots






[jira] [Updated] (SPARK-35821) Spark 3.1.1 Internet Explorer 11 compatibility issues

2021-06-21 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-35821:
-
Description: 
Spark UI-Executor tab is empty in IE11

Spark UI-Stages DAG visualization is empty in IE11

Other tabs look OK.

Attaching some screenshots

  was:
Spark UI-Executor tab is empty in IE11

Spark UI-Stages DAG visualization is empty in IE11

other tabs looks Ok

Attaching some scrreshots


> Spark 3.1.1 Internet Explorer 11 compatibility issues
> -
>
> Key: SPARK-35821
> URL: https://issues.apache.org/jira/browse/SPARK-35821
> Project: Spark
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 3.1.1
>Reporter: jobit mathew
>Priority: Minor
> Attachments: Executortab_Chrome.png, Executortab_IE.PNG, dag_IE.PNG, 
> dag_chrome.png
>
>
> Spark UI-Executor tab is empty in IE11
> Spark UI-Stages DAG visualization is empty in IE11
> Other tabs look OK.
> Attaching some screenshots






[jira] [Updated] (SPARK-35821) Spark 3.1.1 Internet Explorer 11 compatibility issues

2021-06-21 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-35821:
-
Description: 
Spark UI-Executor tab is empty in IE11

Spark UI-Stages DAG visualization is empty in IE11

Other tabs look OK.

Attaching some screenshots

> Spark 3.1.1 Internet Explorer 11 compatibility issues
> -
>
> Key: SPARK-35821
> URL: https://issues.apache.org/jira/browse/SPARK-35821
> Project: Spark
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 3.1.1
>Reporter: jobit mathew
>Priority: Minor
> Attachments: Executortab_Chrome.png, Executortab_IE.PNG, dag_IE.PNG, 
> dag_chrome.png
>
>
> Spark UI-Executor tab is empty in IE11
> Spark UI-Stages DAG visualization is empty in IE11
> Other tabs look OK.
> Attaching some screenshots






[jira] [Updated] (SPARK-35821) Spark 3.1.1 Internet Explorer 11 compatibility issues

2021-06-21 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-35821:
-
Attachment: Executortab_Chrome.png

> Spark 3.1.1 Internet Explorer 11 compatibility issues
> -
>
> Key: SPARK-35821
> URL: https://issues.apache.org/jira/browse/SPARK-35821
> Project: Spark
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 3.1.1
>Reporter: jobit mathew
>Priority: Minor
> Attachments: Executortab_Chrome.png, Executortab_IE.PNG, dag_IE.PNG, 
> dag_chrome.png
>
>







[jira] [Updated] (SPARK-35821) Spark 3.1.1 Internet Explorer 11 compatibility issues

2021-06-21 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-35821:
-
Attachment: Executortab_IE.PNG

> Spark 3.1.1 Internet Explorer 11 compatibility issues
> -
>
> Key: SPARK-35821
> URL: https://issues.apache.org/jira/browse/SPARK-35821
> Project: Spark
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 3.1.1
>Reporter: jobit mathew
>Priority: Minor
> Attachments: Executortab_Chrome.png, Executortab_IE.PNG, dag_IE.PNG, 
> dag_chrome.png
>
>







[jira] [Updated] (SPARK-35821) Spark 3.1.1 Internet Explorer 11 compatibility issues

2021-06-21 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-35821:
-
Attachment: dag_IE.PNG

> Spark 3.1.1 Internet Explorer 11 compatibility issues
> -
>
> Key: SPARK-35821
> URL: https://issues.apache.org/jira/browse/SPARK-35821
> Project: Spark
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 3.1.1
>Reporter: jobit mathew
>Priority: Minor
> Attachments: Executortab_Chrome.png, Executortab_IE.PNG, dag_IE.PNG, 
> dag_chrome.png
>
>







[jira] [Updated] (SPARK-35821) Spark 3.1.1 Internet Explorer 11 compatibility issues

2021-06-21 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-35821:
-
Attachment: dag_chrome.png

> Spark 3.1.1 Internet Explorer 11 compatibility issues
> -
>
> Key: SPARK-35821
> URL: https://issues.apache.org/jira/browse/SPARK-35821
> Project: Spark
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 3.1.1
>Reporter: jobit mathew
>Priority: Minor
> Attachments: dag_IE.PNG, dag_chrome.png
>
>







[jira] [Updated] (SPARK-35822) Spark UI-Executor tab is empty in IE11

2021-06-19 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-35822:
-
Description: The issue is also present in YARN mode.

> Spark UI-Executor tab is empty in IE11
> --
>
> Key: SPARK-35822
> URL: https://issues.apache.org/jira/browse/SPARK-35822
> Project: Spark
>  Issue Type: Sub-task
>  Components: Web UI
>Affects Versions: 3.1.1
>Reporter: jobit mathew
>Priority: Minor
>
> The issue is also present in YARN mode.






[jira] [Created] (SPARK-35823) Spark UI-Stages DAG visualization is empty in IE11

2021-06-19 Thread jobit mathew (Jira)
jobit mathew created SPARK-35823:


 Summary: Spark UI-Stages DAG visualization is empty in IE11
 Key: SPARK-35823
 URL: https://issues.apache.org/jira/browse/SPARK-35823
 Project: Spark
  Issue Type: Sub-task
  Components: Web UI
Affects Versions: 3.1.1
Reporter: jobit mathew









[jira] [Created] (SPARK-35822) Spark UI-Executor tab is empty in IE11

2021-06-19 Thread jobit mathew (Jira)
jobit mathew created SPARK-35822:


 Summary: Spark UI-Executor tab is empty in IE11
 Key: SPARK-35822
 URL: https://issues.apache.org/jira/browse/SPARK-35822
 Project: Spark
  Issue Type: Sub-task
  Components: Web UI
Affects Versions: 3.1.1
Reporter: jobit mathew









[jira] [Created] (SPARK-35821) Spark 3.1.1 Internet Explorer 11 compatibility issues

2021-06-19 Thread jobit mathew (Jira)
jobit mathew created SPARK-35821:


 Summary: Spark 3.1.1 Internet Explorer 11 compatibility issues
 Key: SPARK-35821
 URL: https://issues.apache.org/jira/browse/SPARK-35821
 Project: Spark
  Issue Type: New Feature
  Components: Web UI
Affects Versions: 3.1.1
Reporter: jobit mathew









[jira] [Updated] (SPARK-35821) Spark 3.1.1 Internet Explorer 11 compatibility issues

2021-06-19 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-35821:
-
Priority: Minor  (was: Major)

> Spark 3.1.1 Internet Explorer 11 compatibility issues
> -
>
> Key: SPARK-35821
> URL: https://issues.apache.org/jira/browse/SPARK-35821
> Project: Spark
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 3.1.1
>Reporter: jobit mathew
>Priority: Minor
>







[jira] [Created] (SPARK-35516) Storage UI tab Storage Level tool tip correction

2021-05-25 Thread jobit mathew (Jira)
jobit mathew created SPARK-35516:


 Summary: Storage UI tab Storage Level tool tip correction
 Key: SPARK-35516
 URL: https://issues.apache.org/jira/browse/SPARK-35516
 Project: Spark
  Issue Type: Bug
  Components: Web UI
Affects Versions: 3.1.1
Reporter: jobit mathew


The Storage Level tooltip on the Storage tab of the Web UI needs a correction.

[Table residue from the original Jira issue: a Storage-tab layout whose only 
recoverable cell is the "Storage Level" column header.]

Please change *andreplication* to *and replication* in the tooltip text.






[jira] [Commented] (SPARK-34785) SPARK-34212 issue not fixed if spark.sql.parquet.enableVectorizedReader=true which is default value. Error Parquet column cannot be converted in file.

2021-03-18 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303904#comment-17303904
 ] 

jobit mathew commented on SPARK-34785:
--

[~dongjoon] Could you please take a look?

> SPARK-34212 issue not fixed if spark.sql.parquet.enableVectorizedReader=true 
> which is default value. Error Parquet column cannot be converted in file.
> --
>
> Key: SPARK-34785
> URL: https://issues.apache.org/jira/browse/SPARK-34785
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.1.1
>Reporter: jobit mathew
>Priority: Major
>
> The SPARK-34212 issue is not fixed when 
> spark.sql.parquet.enableVectorizedReader=true, which is the default value.
> If spark.sql.parquet.enableVectorizedReader=false, the scenario below passes, 
> but it reduces performance.
> In Hive, 
> {code:java}
> create table test_decimal(amt decimal(18,2)) stored as parquet; 
> insert into test_decimal select 100;
> alter table test_decimal change amt amt decimal(19,3);
> {code}
> In Spark,
> {code:java}
> select * from test_decimal;
> {code}
> {code:java}
> +----------+
> | amt      |
> +----------+
> | 100.000  |
> +----------+{code}
> But if spark.sql.parquet.enableVectorizedReader=true, the error below occurs:
> {code:java}
> : jdbc:hive2://10.21.18.161:23040/> select * from test_decimal;
> going to print operations logs
> printed operations logs
> going to print operations logs
> printed operations logs
> Getting log thread is interrupted, since query is done!
> Error: org.apache.hive.service.cli.HiveSQLException: Error running query: 
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
> stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 
> (TID 4) (vm2 executor 2): 
> org.apache.spark.sql.execution.QueryExecutionException: Parquet column cannot 
> be converted in file 
> hdfs://hacluster/user/hive/warehouse/test_decimal/00_0. Column: [amt], 
> Expected: decimal(19,3), Found: FIXED_LEN_BYTE_ARRAY
> at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:179)
> at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
> at 
> org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:503)
> at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown
>  Source)
> at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
>  Source)
> at 
> org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
> at 
> org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
> at 
> org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
> at 
> org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
> at 
> org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
> at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
> at org.apache.spark.scheduler.Task.run(Task.scala:131)
> at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
> at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1461)
> at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: 
> org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException
> at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.constructConvertNotSupportedException(VectorizedColumnReader.java:339)
> at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readFixedLenByteArrayBatch(VectorizedColumnReader.java:735)
> at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:312)
> at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:283)
> at 
> 

[jira] [Created] (SPARK-34785) SPARK-34212 issue not fixed if spark.sql.parquet.enableVectorizedReader=true which is default value. Error Parquet column cannot be converted in file.

2021-03-18 Thread jobit mathew (Jira)
jobit mathew created SPARK-34785:


 Summary: SPARK-34212 issue not fixed if 
spark.sql.parquet.enableVectorizedReader=true which is default value. Error 
Parquet column cannot be converted in file.
 Key: SPARK-34785
 URL: https://issues.apache.org/jira/browse/SPARK-34785
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.1.1
Reporter: jobit mathew


The SPARK-34212 issue is not fixed when 
spark.sql.parquet.enableVectorizedReader=true, which is the default value.

If spark.sql.parquet.enableVectorizedReader=false, the scenario below passes, 
but it reduces performance.

In Hive, 
{code:java}
create table test_decimal(amt decimal(18,2)) stored as parquet; 
insert into test_decimal select 100;
alter table test_decimal change amt amt decimal(19,3);
{code}
In Spark,
{code:java}
select * from test_decimal;
{code}
{code:java}
+----------+
| amt      |
+----------+
| 100.000  |
+----------+{code}
But if spark.sql.parquet.enableVectorizedReader=true, the error below occurs:
{code:java}
: jdbc:hive2://10.21.18.161:23040/> select * from test_decimal;
going to print operations logs
printed operations logs
going to print operations logs
printed operations logs
Getting log thread is interrupted, since query is done!
Error: org.apache.hive.service.cli.HiveSQLException: Error running query: 
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 
4) (vm2 executor 2): org.apache.spark.sql.execution.QueryExecutionException: 
Parquet column cannot be converted in file 
hdfs://hacluster/user/hive/warehouse/test_decimal/00_0. Column: [amt], 
Expected: decimal(19,3), Found: FIXED_LEN_BYTE_ARRAY
at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:179)
at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
at 
org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:503)
at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown
 Source)
at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
 Source)
at 
org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at 
org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
at 
org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1461)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: 
org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException
at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.constructConvertNotSupportedException(VectorizedColumnReader.java:339)
at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readFixedLenByteArrayBatch(VectorizedColumnReader.java:735)
at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:312)
at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:283)
at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:181)
at 
org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:37)
at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
... 20 more

Driver stacktrace:
at 
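A minimal workaround sketch, assuming the repro above: disable the vectorized Parquet reader for just the affected session, which (as the description notes) trades read performance for a correct read of the widened decimal column.

```shell
# Hedged workaround: turn off the vectorized Parquet reader for one
# spark-sql session so the decimal(19,3)-widened column can be read.
# The table name comes from the repro above.
spark-sql --conf spark.sql.parquet.enableVectorizedReader=false \
  -e "SELECT * FROM test_decimal;"
```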

[jira] [Created] (SPARK-33429) Support drop column in spark also like in postgresql

2020-11-11 Thread jobit mathew (Jira)
jobit mathew created SPARK-33429:


 Summary: Support drop column in spark also like in postgresql
 Key: SPARK-33429
 URL: https://issues.apache.org/jira/browse/SPARK-33429
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 2.4.5
Reporter: jobit mathew


Support ALTER TABLE with DROP COLUMN in Spark, as PostgreSQL does.

[https://www.postgresql.org/docs/12/sql-altertable.html]

ALTER TABLE tablename DROP COLUMN [ IF EXISTS ] columnname

 
{code:java}
spark-sql> drop database if exists hivemetastoretest cascade;
Time taken: 1.067 seconds
spark-sql> create database hivemetastoretest;
Time taken: 0.326 seconds
spark-sql> use hivemetastoretest;
Time taken: 0.053 seconds
spark-sql> create table jobit4 using parquet as select 2.5;
Time taken: 5.058 seconds
spark-sql> alter table jobit4 add columns(name string);
Time taken: 1.194 seconds
spark-sql> alter table jobit4 drop columns(name);
Error in query:
mismatched input 'columns' expecting {'PARTITION', 'IF'}(line 1, pos 25)

== SQL ==
 alter table jobit4 drop columns(name)
-^^^

spark-sql> alter table jobit4 drop columns name;
Error in query:
mismatched input 'columns' expecting {'PARTITION', 'IF'}(line 1, pos 25)

== SQL ==
 alter table jobit4 drop columns name
-^^^

spark-sql> [
{code}
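Until a native DROP COLUMN is available, one hedged workaround is rewriting the table through the DataFrame API. This is a sketch only: the source table name comes from the repro above, while the target table name `jobit4_pruned` is illustrative.

```shell
# Hedged workaround sketch: drop the column by rewriting the table via
# the DataFrame API. A Scala one-liner is piped into spark-shell's stdin.
echo 'spark.table("jobit4").drop("name").write.saveAsTable("jobit4_pruned")' \
  | spark-shell
```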






[jira] [Created] (SPARK-32498) Support grant/revoke access privileges like postgresql

2020-07-30 Thread jobit mathew (Jira)
jobit mathew created SPARK-32498:


 Summary: Support grant/revoke access privileges like postgresql
 Key: SPARK-32498
 URL: https://issues.apache.org/jira/browse/SPARK-32498
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.1.0
Reporter: jobit mathew


Support GRANT/REVOKE access privileges, as PostgreSQL does.

[https://www.postgresql.org/docs/9.0/sql-grant.html]

Currently, Spark SQL does not support GRANT/REVOKE statements; privileges can 
be managed only through Hive.
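For reference, the PostgreSQL-style statements this issue asks Spark SQL to accept, in the shape of the linked docs (the role and table names are illustrative, run here via psql):

```shell
# Illustrative PostgreSQL privilege statements; 'analyst' and 'mytable'
# are made-up names for the sketch.
psql -c "GRANT SELECT ON mytable TO analyst;"
psql -c "REVOKE SELECT ON mytable FROM analyst;"
```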






[jira] [Updated] (SPARK-32481) Support truncate table to move the data to trash

2020-07-30 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-32481:
-
Description: *Instead of deleting the data, move it to the trash. Data in the 
trash can then be deleted permanently based on configuration.*

> Support truncate table to move the data to trash
> 
>
> Key: SPARK-32481
> URL: https://issues.apache.org/jira/browse/SPARK-32481
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Priority: Minor
>
> *Instead of deleting the data, move it to the trash. Data in the trash can 
> then be deleted permanently based on configuration.*






[jira] [Created] (SPARK-32481) Support truncate table to move the data to trash

2020-07-29 Thread jobit mathew (Jira)
jobit mathew created SPARK-32481:


 Summary: Support truncate table to move the data to trash
 Key: SPARK-32481
 URL: https://issues.apache.org/jira/browse/SPARK-32481
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.1.0
Reporter: jobit mathew









[jira] [Created] (SPARK-32480) Support insert overwrite to move the data to trash

2020-07-29 Thread jobit mathew (Jira)
jobit mathew created SPARK-32480:


 Summary: Support insert overwrite to move the data to trash 
 Key: SPARK-32480
 URL: https://issues.apache.org/jira/browse/SPARK-32480
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core, SQL
Affects Versions: 3.1.0
Reporter: jobit mathew


Instead of deleting the data, move it to the trash. Data in the trash can then 
be deleted permanently based on configuration.






[jira] [Created] (SPARK-32343) CSV predicate pushdown for nested fields

2020-07-16 Thread jobit mathew (Jira)
jobit mathew created SPARK-32343:


 Summary: CSV predicate pushdown for nested fields
 Key: SPARK-32343
 URL: https://issues.apache.org/jira/browse/SPARK-32343
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.1.0
Reporter: jobit mathew









[jira] [Created] (SPARK-32328) Avro predicate pushdown for nested fields

2020-07-15 Thread jobit mathew (Jira)
jobit mathew created SPARK-32328:


 Summary: Avro predicate pushdown for nested fields
 Key: SPARK-32328
 URL: https://issues.apache.org/jira/browse/SPARK-32328
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.1.0
Reporter: jobit mathew









[jira] [Created] (SPARK-32322) Pyspark not launching in Spark IPV6 environment

2020-07-15 Thread jobit mathew (Jira)
jobit mathew created SPARK-32322:


 Summary: Pyspark not launching in Spark IPV6 environment
 Key: SPARK-32322
 URL: https://issues.apache.org/jira/browse/SPARK-32322
 Project: Spark
  Issue Type: Bug
  Components: PySpark
Affects Versions: 3.1.0
Reporter: jobit mathew


PySpark does not launch in a Spark IPv6 environment.

Initial analysis suggests that Python is not supporting IPv6 in this setup.
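A hedged first diagnostic, before digging into PySpark itself: ask the local CPython build whether it was compiled with IPv6 support (`socket.has_ipv6` is standard CPython).

```shell
# Check whether the Python that PySpark will launch reports IPv6 support.
python3 -c "import socket; print(socket.has_ipv6)"
# If this prints False, the failure is below Spark, in the Python build.
```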






[jira] [Created] (SPARK-32103) Spark support IPV6 in yarn mode

2020-06-26 Thread jobit mathew (Jira)
jobit mathew created SPARK-32103:


 Summary: Spark support IPV6 in yarn mode
 Key: SPARK-32103
 URL: https://issues.apache.org/jira/browse/SPARK-32103
 Project: Spark
  Issue Type: Bug
  Components: YARN
Affects Versions: 3.1.0
Reporter: jobit mathew


Support IPv6 in YARN mode.

 






[jira] [Commented] (SPARK-31884) Support MongoDB Kerberos login in JDBC connector

2020-06-04 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17126411#comment-17126411
 ] 

jobit mathew commented on SPARK-31884:
--

[~gsomogyi]

[https://docs.mongodb.com/manual/tutorial/control-access-to-mongodb-with-kerberos-authentication/]

Setting up and configuring a Kerberos deployment is beyond the scope of this 
document. Please refer to the [MIT Kerberos 
documentation|https://web.mit.edu/kerberos/krb5-latest/doc/] or your operating 
system documentation for information on how to configure a Kerberos deployment.

In order to use MongoDB with Kerberos, a [Kerberos service 
principal|https://docs.mongodb.com/manual/core/kerberos/#kerberos-service-principal]
 for each 
[{{mongod}}|https://docs.mongodb.com/manual/reference/program/mongod/#bin.mongod]
 and 
[{{mongos}}|https://docs.mongodb.com/manual/reference/program/mongos/#bin.mongos]
 instance in your MongoDB deployment must be [added to the Kerberos 
database|https://web.mit.edu/kerberos/krb5-latest/doc/admin/database.html#add-principal].
 You can add the service principal by running a command similar to the 
following on your KDC:
kadmin.local addprinc mongodb/m1.example@example.com
On each system running 
[{{mongod}}|https://docs.mongodb.com/manual/reference/program/mongod/#bin.mongod]
 or 
[{{mongos}}|https://docs.mongodb.com/manual/reference/program/mongos/#bin.mongos],
 a [keytab file|https://docs.mongodb.com/manual/core/kerberos/#keytab-files] 
must be 
[created|https://web.mit.edu/kerberos/krb5-latest/doc/admin/appl_servers.html#keytabs]
 for the respective service principal. You can create the keytab file by 
running a command similar to the following on the system running 
[{{mongod}}|https://docs.mongodb.com/manual/reference/program/mongod/#bin.mongod]
 or 
[{{mongos}}|https://docs.mongodb.com/manual/reference/program/mongos/#bin.mongos]:
kadmin.local ktadd mongodb/m1.example@example.com
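A hedged follow-up to the doc excerpt above: after `ktadd`, verify the service principal actually landed in the keytab (`klist -k` is standard MIT Kerberos; the keytab path is illustrative).

```shell
# List the principals stored in the service keytab and confirm the
# mongodb service principal from the steps above is present.
klist -k /etc/krb5.keytab | grep mongodb
```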

> Support MongoDB Kerberos login in JDBC connector
> 
>
> Key: SPARK-31884
> URL: https://issues.apache.org/jira/browse/SPARK-31884
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31884) Support MongoDB Kerberos login in JDBC connector

2020-06-01 Thread jobit mathew (Jira)
jobit mathew created SPARK-31884:


 Summary: Support MongoDB Kerberos login in JDBC connector
 Key: SPARK-31884
 URL: https://issues.apache.org/jira/browse/SPARK-31884
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core
Affects Versions: 3.1.0
Reporter: jobit mathew






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31857) Support Azure SQLDB Kerberos login in JDBC connector

2020-05-28 Thread jobit mathew (Jira)
jobit mathew created SPARK-31857:


 Summary: Support Azure SQLDB Kerberos login in JDBC connector
 Key: SPARK-31857
 URL: https://issues.apache.org/jira/browse/SPARK-31857
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.1.0
Reporter: jobit mathew






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-31814) Null in Date conversion from yyMMddHHmmss for specific date and time

2020-05-28 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118783#comment-17118783
 ] 

jobit mathew commented on SPARK-31814:
--

[~sunnyjain1] can you please check in the latest version?

I checked in Spark 2.4.5, and this is the result:

scala> sql("select to_date('19033100','yyMMddHHmmss')").show(false);
+-----------------------------------+
|to_date('19033100', 'yyMMddHHmmss')|
+-----------------------------------+
|2019-03-31                         |
+-----------------------------------+


scala> sql("select to_date('19033102','yyMMddHHmmss')").show(false);
+-----------------------------------+
|to_date('19033102', 'yyMMddHHmmss')|
+-----------------------------------+
|2019-03-31                         |
+-----------------------------------+


scala>
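For what it is worth, one plausible (unconfirmed) explanation for the reporter's null is a daylight-saving-time gap: in time zones where DST started at 02:00 on 2019-03-31, local times in the 02:** hour simply do not exist, and a strict parser rejects them while a lenient one or a DST-free zone (as above) succeeds. The gap itself can be shown with plain java.time, no Spark needed:

```scala
import java.time.{LocalDateTime, ZoneId}

// 2019-03-31 02:30 falls inside the DST "spring forward" gap in Europe/Paris:
// clocks jumped from 02:00 straight to 03:00, so this local time never existed.
val gapTime  = LocalDateTime.of(2019, 3, 31, 2, 30)
val resolved = gapTime.atZone(ZoneId.of("Europe/Paris"))

// java.time resolves a gap by shifting forward by the gap's length (one hour);
// a strict date parser in the same situation may instead fail and yield null.
println(resolved) // 2019-03-31T03:30+02:00[Europe/Paris]
```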

> Null in Date conversion from yyMMddHHmmss for specific date and time
> 
>
> Key: SPARK-31814
> URL: https://issues.apache.org/jira/browse/SPARK-31814
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0
> Environment: Spark Version : 2.3.0.2.6.5.0-292
> Distribution : Hortonworks
>Reporter: Sunny Jain
>Priority: Minor
>
> Hi,
>  
> We are trying to convert a column with string datatype to date type using the 
> below example. It seems to work for all timestamps except those for 31st March 
> 2019 02:**:**, like 19033102. Can you please look into it. Thanks.
> scala> sql("select to_date('19033100','yyMMddHHmmss')").show(false)
> +-----------------------------------+
> |to_date('19033100', 'yyMMddHHmmss')|
> +-----------------------------------+
> |2019-03-31                         |
> +-----------------------------------+
>  
>  
> Interestingly, the below is not working for the highlighted hours (02).
> scala> sql("select to_date('19033102','yyMMddHHmmss')").show(false)
> +-----------------------------------+
> |to_date('190331{color:#ff}02{color}', 'yyMMddHHmmss')|
> +-----------------------------------+
> |null                               |
> +-----------------------------------+
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31832) Add tool tip for Structured streaming page tables

2020-05-26 Thread jobit mathew (Jira)
jobit mathew created SPARK-31832:


 Summary: Add tool tip for Structured streaming page tables
 Key: SPARK-31832
 URL: https://issues.apache.org/jira/browse/SPARK-31832
 Project: Spark
  Issue Type: Sub-task
  Components: SQL, Web UI
Affects Versions: 3.1.0
Reporter: jobit mathew


It would be better to add tooltips for the Structured Streaming page tables.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-31774) getting the Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, at org.apache.spark.sql.catalyst.errors.package$.attachTree(p

2020-05-21 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113184#comment-17113184
 ] 

jobit mathew commented on SPARK-31774:
--

[~pankaj24] does the issue occur only in Spark 2.2? Maybe you can try the latest 
Spark 2.4.5 or the 3.0 preview.
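A common workaround-level approach (not an official fix) for this kind of binding failure is to sanitize the column names before the except/count, since codegen can trip over characters like @ % -> in attribute names. A minimal sketch of just the renaming step; applying it via `df.toDF(sanitized: _*)` on the loaded DataFrame is an assumption of the workaround, not something from this ticket:

```scala
// Sketch: normalize special characters in column names to underscores.
// The actual rename would be df.toDF(sanitized: _*) on the Excel DataFrame
// before running except(...).count().
def sanitize(name: String): String = name.replaceAll("[^A-Za-z0-9_]", "_")

val original  = Seq("price@usd", "rate%", "a->b")
val sanitized = original.map(sanitize)
println(sanitized) // List(price_usd, rate_, a__b)
```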

> getting the Caused by: 
> org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding 
> attribute, at 
> org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
> ---
>
> Key: SPARK-31774
> URL: https://issues.apache.org/jira/browse/SPARK-31774
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.2.0
> Environment: spark 2.2
>Reporter: Pankaj Tiwari
>Priority: Major
>
> Actually I am loading an Excel file which has some 90 columns, and some 
> column names contain special characters as well, like @ % -> . etc. So 
> while I am doing one use case like:
> sourceDataSet.select(columnSeq).except(targetDataset.select(columnSeq));
> this is working fine, but as soon as I am running
> sourceDataSet.select(columnSeq).except(targetDataset.select(columnSeq)).count()
> it is failing with an error like:
> org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
> Exchange SinglePartition
> +- *HashAggregate(keys=[], functions=[partial_count(1)], 
> output=[count#26596L])
>    +- *HashAggregate(keys=columns name 
>  
>  
> Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: 
> Binding attribute, tree:column namet#14050
>         at 
> org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
>         at 
> org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:88)
>         at 
> org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:87)
>         at 
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
>         at 
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
>         at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
>         at 
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:266)
>         at 
> org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:256)
>         at 
> org.apache.spark.sql.catalyst.expressions.BindReferences$.bindReference(BoundAttribute.scala:87)
>         at 
> org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$40.apply(HashAggregateExec.scala:703)
>         at 
> org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$40.apply(HashAggregateExec.scala:703)
>         at 
> scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
>         at 
> scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
>         at 
> scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
>         at 
> scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
>         at scala.collection.immutable.Stream.foreach(Stream.scala:595)
>         at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:115)
>         at scala.collection.AbstractTraversable.count(Traversable.scala:104)
>         at 
> org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection$.createCode(GenerateUnsafeProjection.scala:312)
>         at 
> org.apache.spark.sql.execution.aggregate.HashAggregateExec.doConsumeWithKeys(HashAggregateExec.scala:702)
>         at 
> org.apache.spark.sql.execution.aggregate.HashAggregateExec.doConsume(HashAggregateExec.scala:156)
>         at 
> org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:155)
>         at 
> org.apache.spark.sql.execution.ProjectExec.consume(basicPhysicalOperators.scala:36)
>  
>  
>  
>  
> Caused by: java.lang.RuntimeException: Couldn't find here one name of column 
> following with
>   at scala.sys.package$.error(package.scala:27)
>         at 
> org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:94)
>         at 
> 

[jira] [Comment Edited] (SPARK-31617) support drop multiple functions

2020-05-05 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17099969#comment-17099969
 ] 

jobit mathew edited comment on SPARK-31617 at 5/5/20, 2:47 PM:
---

[~hyukjin.kwon] Microsoft SQL Server supports this, and if we can delete 
multiple functions using one command it can save much time.


was (Author: jobitmathew):
[~hyukjin.kwon] microsoft sqlserver support

> support drop multiple functions
> ---
>
> Key: SPARK-31617
> URL: https://issues.apache.org/jira/browse/SPARK-31617
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Priority: Minor
>
> PostgreSQL supports dropping multiple functions in one command. It would be 
> better if Spark SQL also supported this.
>  
> [https://www.postgresql.org/docs/12/sql-dropfunction.html]
> Drop multiple functions in one command:
> DROP FUNCTION sqrt(integer), sqrt(bigint);



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-31617) support drop multiple functions

2020-05-05 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17099969#comment-17099969
 ] 

jobit mathew edited comment on SPARK-31617 at 5/5/20, 2:46 PM:
---

[~hyukjin.kwon] microsoft sqlserver support


was (Author: jobitmathew):
[~hyukjin.kwon] micrpsoft sqlserver support

> support drop multiple functions
> ---
>
> Key: SPARK-31617
> URL: https://issues.apache.org/jira/browse/SPARK-31617
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Priority: Minor
>
> PostgreSQL supports dropping multiple functions in one command. It would be 
> better if Spark SQL also supported this.
>  
> [https://www.postgresql.org/docs/12/sql-dropfunction.html]
> Drop multiple functions in one command:
> DROP FUNCTION sqrt(integer), sqrt(bigint);



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-31617) support drop multiple functions

2020-05-05 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17099969#comment-17099969
 ] 

jobit mathew commented on SPARK-31617:
--

[~hyukjin.kwon] micrpsoft sqlserver support

> support drop multiple functions
> ---
>
> Key: SPARK-31617
> URL: https://issues.apache.org/jira/browse/SPARK-31617
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Priority: Minor
>
> PostgreSQL supports dropping multiple functions in one command. It would be 
> better if Spark SQL also supported this.
>  
> [https://www.postgresql.org/docs/12/sql-dropfunction.html]
> Drop multiple functions in one command:
> DROP FUNCTION sqrt(integer), sqrt(bigint);



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31642) Support pagination for spark structured streaming tab

2020-05-05 Thread jobit mathew (Jira)
jobit mathew created SPARK-31642:


 Summary: Support pagination for the Spark Structured Streaming tab
 Key: SPARK-31642
 URL: https://issues.apache.org/jira/browse/SPARK-31642
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 3.1.0
Reporter: jobit mathew


Support pagination for the Spark Structured Streaming tab.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31617) support drop multiple functions

2020-04-30 Thread jobit mathew (Jira)
jobit mathew created SPARK-31617:


 Summary: support drop multiple functions
 Key: SPARK-31617
 URL: https://issues.apache.org/jira/browse/SPARK-31617
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.1.0
Reporter: jobit mathew


PostgreSQL supports dropping multiple functions in one command. It would be 
better if Spark SQL also supported this.

 

[https://www.postgresql.org/docs/12/sql-dropfunction.html]

Drop multiple functions in one command:
DROP FUNCTION sqrt(integer), sqrt(bigint);
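Until such syntax exists, the PostgreSQL behaviour can be emulated from the caller side by issuing one statement per function. A minimal sketch — the `spark.sql` call is an assumption (a `SparkSession` named `spark`), and note that Spark's `DROP FUNCTION` takes function names rather than PostgreSQL-style signatures:

```scala
// Sketch: emulate a multi-function DROP by building one statement per name.
val functions  = Seq("sqrt_int", "sqrt_big")                      // illustrative names
val statements = functions.map(f => s"DROP FUNCTION IF EXISTS $f")
// statements.foreach(spark.sql)  // 'spark' is an assumed SparkSession
println(statements)
```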



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-30693) Document STORED AS Clause of CREATE statement in SQL Reference

2020-01-31 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027441#comment-17027441
 ] 

jobit mathew commented on SPARK-30693:
--

I will work on this

> Document STORED AS Clause of CREATE statement in SQL Reference
> --
>
> Key: SPARK-30693
> URL: https://issues.apache.org/jira/browse/SPARK-30693
> Project: Spark
>  Issue Type: Sub-task
>  Components: Documentation, SQL
>Affects Versions: 2.4.4
>Reporter: jobit mathew
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-30693) Document STORED AS Clause of CREATE statement in SQL Reference

2020-01-31 Thread jobit mathew (Jira)
jobit mathew created SPARK-30693:


 Summary: Document STORED AS Clause of CREATE statement in SQL 
Reference
 Key: SPARK-30693
 URL: https://issues.apache.org/jira/browse/SPARK-30693
 Project: Spark
  Issue Type: Sub-task
  Components: Documentation, SQL
Affects Versions: 2.4.4
Reporter: jobit mathew






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-30635) Document PARTITIONED BY Clause of CREATE statement in SQL Reference

2020-01-24 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17022966#comment-17022966
 ] 

jobit mathew commented on SPARK-30635:
--

I will work on this

> Document PARTITIONED BY  Clause of CREATE statement in SQL Reference
> 
>
> Key: SPARK-30635
> URL: https://issues.apache.org/jira/browse/SPARK-30635
> Project: Spark
>  Issue Type: Sub-task
>  Components: Documentation, SQL
>Affects Versions: 2.4.4
>Reporter: jobit mathew
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-30635) Document PARTITIONED BY Clause of CREATE statement in SQL Reference

2020-01-24 Thread jobit mathew (Jira)
jobit mathew created SPARK-30635:


 Summary: Document PARTITIONED BY  Clause of CREATE statement in 
SQL Reference
 Key: SPARK-30635
 URL: https://issues.apache.org/jira/browse/SPARK-30635
 Project: Spark
  Issue Type: Sub-task
  Components: Documentation, SQL
Affects Versions: 2.4.4
Reporter: jobit mathew






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-30387) Improve YarnClientSchedulerBackend log message

2019-12-30 Thread jobit mathew (Jira)
jobit mathew created SPARK-30387:


 Summary: Improve YarnClientSchedulerBackend log message
 Key: SPARK-30387
 URL: https://issues.apache.org/jira/browse/SPARK-30387
 Project: Spark
  Issue Type: Improvement
  Components: YARN
Affects Versions: 3.0.0
Reporter: jobit mathew


The ShutdownHook of YarnClientSchedulerBackend prints "Stopped", which can be 
improved.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-30317) Spark streaming programming document updation

2019-12-20 Thread jobit mathew (Jira)
jobit mathew created SPARK-30317:


 Summary: Spark streaming programming document updation
 Key: SPARK-30317
 URL: https://issues.apache.org/jira/browse/SPARK-30317
 Project: Spark
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 3.0.0
Reporter: jobit mathew






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-30233) Spark WebUI task table indentation issue

2019-12-12 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30233:
-
Affects Version/s: (was: 3.0.0)
   2.3.4

> Spark WebUI task table indentation  issue
> -
>
> Key: SPARK-30233
> URL: https://issues.apache.org/jira/browse/SPARK-30233
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.3.4
>Reporter: jobit mathew
>Priority: Minor
> Attachments: sparkopensourceissue.PNG
>
>
> !sparkopensourceissue.PNG!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-30233) Spark WebUI task table indentation issue

2019-12-12 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30233:
-
Attachment: sparkopensourceissue.PNG

> Spark WebUI task table indentation  issue
> -
>
> Key: SPARK-30233
> URL: https://issues.apache.org/jira/browse/SPARK-30233
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
> Attachments: sparkopensourceissue.PNG
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-30233) Spark WebUI task table indentation issue

2019-12-12 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30233:
-
Description: !sparkopensourceissue.PNG!

> Spark WebUI task table indentation  issue
> -
>
> Key: SPARK-30233
> URL: https://issues.apache.org/jira/browse/SPARK-30233
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
> Attachments: sparkopensourceissue.PNG
>
>
> !sparkopensourceissue.PNG!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-30233) Spark WebUI task table indentation issue

2019-12-12 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30233:
-
Attachment: (was: sparkopensourceissue.PNG)

> Spark WebUI task table indentation  issue
> -
>
> Key: SPARK-30233
> URL: https://issues.apache.org/jira/browse/SPARK-30233
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
> Attachments: sparkopensourceissue.PNG
>
>
> !sparkopensourceissue.PNG!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-30233) Spark WebUI task table indentation issue

2019-12-12 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30233:
-
Attachment: sparkopensourceissue.PNG

> Spark WebUI task table indentation  issue
> -
>
> Key: SPARK-30233
> URL: https://issues.apache.org/jira/browse/SPARK-30233
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
> Attachments: sparkopensourceissue.PNG
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-30233) Spark WebUI task table indentation issue

2019-12-12 Thread jobit mathew (Jira)
jobit mathew created SPARK-30233:


 Summary: Spark WebUI task table indentation  issue
 Key: SPARK-30233
 URL: https://issues.apache.org/jira/browse/SPARK-30233
 Project: Spark
  Issue Type: Bug
  Components: Web UI
Affects Versions: 3.0.0
Reporter: jobit mathew






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-30176) Eliminate warnings: part 6

2019-12-08 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30176:
-
Description: 

sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala
{code:java}
 Warning:Warning:line (32)java: org.apache.spark.sql.expressions.javalang.typed 
in org.apache.spark.sql.expressions.javalang has been deprecated
Warning:Warning:line (91)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated
Warning:Warning:line (100)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated
Warning:Warning:line (109)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated
Warning:Warning:line (118)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated
{code}
sql/core/src/test/scala/org/apache/spark/sql/DatasetBenchmark.scala
{code:java}
Warning:Warning:line (242)object typed in package scalalang is deprecated 
(since 3.0.0): please use untyped builtin aggregate functions.
  df.as[Data].select(typed.sumLong((d: Data) => 
d.l)).queryExecution.toRdd.foreach(_ => ())
{code}

sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala
{code:java}
Warning:Warning:line (714)method from_utc_timestamp in object functions is 
deprecated (since 3.0.0): This function is deprecated and will be removed in 
future versions.
df.select(from_utc_timestamp(col("a"), "PST")),
Warning:Warning:line (719)method from_utc_timestamp in object functions is 
deprecated (since 3.0.0): This function is deprecated and will be removed in 
future versions.
df.select(from_utc_timestamp(col("b"), "PST")),
Warning:Warning:line (725)method from_utc_timestamp in object functions is 
deprecated (since 3.0.0): This function is deprecated and will be removed in 
future versions.
  df.select(from_utc_timestamp(col("a"), "PST")).collect()
Warning:Warning:line (737)method from_utc_timestamp in object functions is 
deprecated (since 3.0.0): This function is deprecated and will be removed in 
future versions.
df.select(from_utc_timestamp(col("a"), col("c"))),
Warning:Warning:line (742)method from_utc_timestamp in object functions is 
deprecated (since 3.0.0): This function is deprecated and will be removed in 
future versions.
df.select(from_utc_timestamp(col("b"), col("c"))),
Warning:Warning:line (756)method to_utc_timestamp in object functions is 
deprecated (since 3.0.0): This function is deprecated and will be removed in 
future versions.
df.select(to_utc_timestamp(col("a"), "PST")),
Warning:Warning:line (761)method to_utc_timestamp in object functions is 
deprecated (since 3.0.0): This function is deprecated and will be removed in 
future versions.
df.select(to_utc_timestamp(col("b"), "PST")),
Warning:Warning:line (767)method to_utc_timestamp in object functions is 
deprecated (since 3.0.0): This function is deprecated and will be removed in 
future versions.
  df.select(to_utc_timestamp(col("a"), "PST")).collect()
Warning:Warning:line (779)method to_utc_timestamp in object functions is 
deprecated (since 3.0.0): This function is deprecated and will be removed in 
future versions.
df.select(to_utc_timestamp(col("a"), col("c"))),
Warning:Warning:line (784)method to_utc_timestamp in object functions is 
deprecated (since 3.0.0): This function is deprecated and will be removed in 
future versions.
df.select(to_utc_timestamp(col("b"), col("c"))),
{code}
sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala
{code:java}
Warning:Warning:line (241)method merge in object Row is deprecated (since 
3.0.0): This method is deprecated and will be removed in future versions.
  testData.rdd.flatMap(row => Seq.fill(16)(Row.merge(row, 
row))).collect().toSeq)
{code}
sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
{code:java}
 Warning:Warning:line (787)method merge in object Row is deprecated (since 
3.0.0): This method is deprecated and will be removed in future versions.
row => Seq.fill(16)(Row.merge(row, row))).collect().toSeq)
{code}

sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala
{code:java}
 Warning:Warning:line (332)constructor ExpressionInfo in class ExpressionInfo 
is deprecated: see corresponding Javadoc for more information.
new ExpressionInfo("noClass", "myDb", "myFunction", "usage", "extended 
usage"),
Warning:Warning:line (729)constructor ExpressionInfo in class 
ExpressionInfo is deprecated: see corresponding Javadoc for more information.
new ExpressionInfo("noClass", "myDb", "myFunction2", "usage", "extended 
usage"),
  

[jira] [Updated] (SPARK-30176) Eliminate warnings: part 6

2019-12-08 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30176:
-
Description: 

sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala
{code:java}
{code}
sql/core/src/test/scala/org/apache/spark/sql/DatasetBenchmark.scala
{code:java}
{code}

sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala
{code:java}
{code}
sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala
{code:java}
{code}
sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
{code:java}
{code}

sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala
{code:java}
{code}

sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala
{code:java}
{code}

  was:

sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala
sql/core/src/test/scala/org/apache/spark/sql/DatasetBenchmark.scala
sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala
sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala
sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala

sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala

sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala


> Eliminate warnings: part 6
> --
>
> Key: SPARK-30176
> URL: https://issues.apache.org/jira/browse/SPARK-30176
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
>   
> sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala
> {code:java}
> {code}
>   sql/core/src/test/scala/org/apache/spark/sql/DatasetBenchmark.scala
> {code:java}
> {code}
>   sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala
> {code:java}
> {code}
>   sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala
> {code:java}
> {code}
>   sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
> {code:java}
> {code}
>   
> sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala
> {code:java}
> {code}
>   
> sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala
> {code:java}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-30175) Eliminate warnings: part 5

2019-12-08 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30175:
-
Description: 
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala

{code:java}
Warning:Warning:line (36)class WriteToDataSourceV2 in package v2 is deprecated 
(since 2.4.0): Use specific logical plans like AppendData instead
  def createPlan(batchId: Long): WriteToDataSourceV2 = {
Warning:Warning:line (37)class WriteToDataSourceV2 in package v2 is 
deprecated (since 2.4.0): Use specific logical plans like AppendData instead
WriteToDataSourceV2(new MicroBatchWrite(batchId, write), query)
{code}

sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala

{code:java}
 Warning:Warning:line (703)a pure expression does nothing in statement 
position; multiline expressions might require enclosing parentheses
  q1
{code}

sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala

{code:java}
Warning:Warning:line (285)object typed in package scalalang is deprecated 
(since 3.0.0): please use untyped builtin aggregate functions.
val aggregated = inputData.toDS().groupByKey(_._1).agg(typed.sumLong(_._2))
{code}

  was:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala

sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala

sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala


> Eliminate warnings: part 5
> --
>
> Key: SPARK-30175
> URL: https://issues.apache.org/jira/browse/SPARK-30175
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala
> {code:java}
> Warning:Warning:line (36)class WriteToDataSourceV2 in package v2 is deprecated (since 2.4.0): Use specific logical plans like AppendData instead
>   def createPlan(batchId: Long): WriteToDataSourceV2 = {
> Warning:Warning:line (37)class WriteToDataSourceV2 in package v2 is deprecated (since 2.4.0): Use specific logical plans like AppendData instead
> WriteToDataSourceV2(new MicroBatchWrite(batchId, write), query)
> {code}
> sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala
> {code:java}
>  Warning:Warning:line (703)a pure expression does nothing in statement position; multiline expressions might require enclosing parentheses
>   q1
> {code}
> sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala
> {code:java}
> Warning:Warning:line (285)object typed in package scalalang is deprecated (since 3.0.0): please use untyped builtin aggregate functions.
> val aggregated = inputData.toDS().groupByKey(_._1).agg(typed.sumLong(_._2))
> {code}






[jira] [Updated] (SPARK-30174) Eliminate warnings :part 4

2019-12-08 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30174:
-
Description: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
{code:java}
Warning:Warning:line (127)value ENABLE_JOB_SUMMARY in class ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
  && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
Warning:Warning:line (261)class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
new org.apache.parquet.hadoop.ParquetInputSplit(
Warning:Warning:line (272)method readFooter in class ParquetFileReader is deprecated: see corresponding Javadoc for more information.
ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
Warning:Warning:line (442)method readFooter in class ParquetFileReader is deprecated: see corresponding Javadoc for more information.
  ParquetFileReader.readFooter(

{code}

sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetWriteBuilder.scala
{code:java}

 Warning:Warning:line (91)value ENABLE_JOB_SUMMARY in class ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
  && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {

{code}

  was:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala

sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetWriteBuilder.scala


> Eliminate warnings :part 4
> --
>
> Key: SPARK-30174
> URL: https://issues.apache.org/jira/browse/SPARK-30174
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
> {code:java}
> Warning:Warning:line (127)value ENABLE_JOB_SUMMARY in class ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
>   && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
> Warning:Warning:line (261)class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
> new org.apache.parquet.hadoop.ParquetInputSplit(
> Warning:Warning:line (272)method readFooter in class ParquetFileReader is deprecated: see corresponding Javadoc for more information.
> ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
> Warning:Warning:line (442)method readFooter in class ParquetFileReader is deprecated: see corresponding Javadoc for more information.
>   ParquetFileReader.readFooter(
> {code}
> sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetWriteBuilder.scala
> {code:java}
>  Warning:Warning:line (91)value ENABLE_JOB_SUMMARY in class ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
>   && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
> {code}






[jira] [Updated] (SPARK-30176) Eliminate warnings: part 6

2019-12-08 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30176:
-
Description: 

sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala
sql/core/src/test/scala/org/apache/spark/sql/DatasetBenchmark.scala
sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala
sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala
sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala

sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala

sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala

> Eliminate warnings: part 6
> --
>
> Key: SPARK-30176
> URL: https://issues.apache.org/jira/browse/SPARK-30176
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
>   
> sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala
>   sql/core/src/test/scala/org/apache/spark/sql/DatasetBenchmark.scala
>   sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala
>   sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala
>   sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
>   
> sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala
>   
> sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala






[jira] [Created] (SPARK-30176) Eliminate warnings: part 6

2019-12-08 Thread jobit mathew (Jira)
jobit mathew created SPARK-30176:


 Summary: Eliminate warnings: part 6
 Key: SPARK-30176
 URL: https://issues.apache.org/jira/browse/SPARK-30176
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew









[jira] [Updated] (SPARK-30175) Eliminate warnings: part 5

2019-12-08 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30175:
-
Description: 
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala

sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala

sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala

> Eliminate warnings: part 5
> --
>
> Key: SPARK-30175
> URL: https://issues.apache.org/jira/browse/SPARK-30175
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala
> sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala
> sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala






[jira] [Updated] (SPARK-30175) Eliminate warnings: part 5

2019-12-08 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30175:
-
Summary: Eliminate warnings: part 5  (was: Eliminate warnings: part5)

> Eliminate warnings: part 5
> --
>
> Key: SPARK-30175
> URL: https://issues.apache.org/jira/browse/SPARK-30175
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>







[jira] [Created] (SPARK-30175) Eliminate warnings: part5

2019-12-08 Thread jobit mathew (Jira)
jobit mathew created SPARK-30175:


 Summary: Eliminate warnings: part5
 Key: SPARK-30175
 URL: https://issues.apache.org/jira/browse/SPARK-30175
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew









[jira] [Updated] (SPARK-30174) Eliminate warnings :part 4

2019-12-08 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30174:
-
Description: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala

sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetWriteBuilder.scala

> Eliminate warnings :part 4
> --
>
> Key: SPARK-30174
> URL: https://issues.apache.org/jira/browse/SPARK-30174
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
> sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetWriteBuilder.scala






[jira] [Created] (SPARK-30174) Eliminate warnings :part 4

2019-12-08 Thread jobit mathew (Jira)
jobit mathew created SPARK-30174:


 Summary: Eliminate warnings :part 4
 Key: SPARK-30174
 URL: https://issues.apache.org/jira/browse/SPARK-30174
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew









[jira] [Created] (SPARK-30150) Manage resources (ADD/LIST) does not support quoted path

2019-12-06 Thread jobit mathew (Jira)
jobit mathew created SPARK-30150:


 Summary: Manage resources (ADD/LIST) does not support quoted path
 Key: SPARK-30150
 URL: https://issues.apache.org/jira/browse/SPARK-30150
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew


Manage resources (ADD/LIST) does not support quoted paths.
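For illustration, the requested behaviour can be sketched with shell-style tokenisation — a hypothetical `parse_add_resource` helper, not Spark's actual SQL parser:

```python
import shlex

def parse_add_resource(command: str) -> str:
    """Extract the resource path from an ADD JAR/FILE command, honouring
    single or double quotes around the path (illustrative sketch of the
    requested behaviour, not Spark's parser)."""
    # shlex splits shell-style, so a quoted segment (including a path
    # containing spaces) stays together as one token.
    tokens = shlex.split(command)
    if len(tokens) < 3 or tokens[0].upper() != "ADD":
        raise ValueError(f"not an ADD resource command: {command!r}")
    return tokens[2]

# A quoted path with spaces survives as a single path:
# parse_add_resource('ADD JAR "/tmp/test dir/my.jar"') -> '/tmp/test dir/my.jar'
```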

 






[jira] [Created] (SPARK-30148) Optimize writing plans if there is an analysis exception

2019-12-06 Thread jobit mathew (Jira)
jobit mathew created SPARK-30148:


 Summary: Optimize writing plans if there is an analysis exception
 Key: SPARK-30148
 URL: https://issues.apache.org/jira/browse/SPARK-30148
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew


Optimize writing plans if there is an analysis exception






[jira] [Reopened] (SPARK-29152) Spark Executor Plugin API shutdown is not proper when dynamic allocation enabled

2019-12-06 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew reopened SPARK-29152:
--

Reopening the Jira, as the issue exists in the master branch also.

> Spark Executor Plugin API shutdown is not proper when dynamic allocation 
> enabled
> 
>
> Key: SPARK-29152
> URL: https://issues.apache.org/jira/browse/SPARK-29152
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.4.0, 2.4.1, 2.4.2, 2.4.3, 2.4.4, 3.0.0
>Reporter: jobit mathew
>Priority: Major
>
> *Issue Description*
> The Spark Executor Plugin API does *not handle shutdown properly* when 
> dynamic allocation is enabled: the plugin's shutdown method is not invoked 
> when *executors become dead* after the inactive timeout.
> *Test Precondition*
> 1. Create a plugin and build it into a jar named SparkExecutorPlugin.jar:
> {code:java}
> import org.apache.spark.ExecutorPlugin;
>
> public class ExecutoTest1 implements ExecutorPlugin {
>     public void init() {
>         System.out.println("Executor Plugin Initialised.");
>     }
>
>     public void shutdown() {
>         System.out.println("Executor plugin closed successfully.");
>     }
> }
> {code}
> 2. Put the jar in the folder /spark/examples/jars
> *Test Steps*
> 1. Launch bin/spark-sql with dynamic allocation enabled:
> {code:java}
> ./spark-sql --master yarn --conf spark.executor.plugins=ExecutoTest1 --jars 
> /opt/HA/C10/install/spark/spark/examples/jars/SparkExecutorPlugin.jar --conf 
> spark.dynamicAllocation.enabled=true --conf 
> spark.dynamicAllocation.initialExecutors=2 --conf 
> spark.dynamicAllocation.minExecutors=1
> {code}
> 2. Create a table, insert data, and run select * from the table.
> 3. Check the Spark UI Jobs tab/SQL tab.
> 4. Check every executor's application log file (the Executors tab lists all 
> executors) for the executor plugin initialization and shutdown messages, for 
> example 
> /yarn/logdir/application_1567156749079_0025/container_e02_1567156749079_0025_01_05/stdout
> 5. Wait for an executor to become dead after the inactive time and check the 
> same container log.
> 6. Kill the spark-sql session and check the container log for the executor 
> plugin shutdown message.
> *Expected Output*
> 1. The job should succeed: the create table, insert, and select queries 
> should all complete.
> 2. While the query runs, every executor's log should contain the plugin 
> init message "Executor Plugin Initialised."
> 3. Once an executor is dead, its log file should contain the shutdown 
> message "Executor plugin closed successfully."
> 4. Once the SQL application is closed, the log should contain the shutdown 
> message "Executor plugin closed successfully."
> *Actual Output*
> The shutdown message is not logged when an executor becomes dead after the 
> inactive time.
> *Observation*
> Without dynamic allocation the executor plugin works fine, but after 
> enabling dynamic allocation the executor shutdown is not processed.
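The lifecycle the report expects can be sketched as a small simulation (hypothetical Python stand-ins, not Spark's real `ExecutorPlugin` interface or executor code): the shutdown hook should run on the same path that removes an idle executor:

```python
class ExecutorPluginStub:
    """Illustrative stand-in for an executor plugin (hypothetical; not
    the real org.apache.spark.ExecutorPlugin interface)."""

    def __init__(self):
        self.events = []

    def init(self):
        self.events.append("init")

    def shutdown(self):
        self.events.append("shutdown")


class Executor:
    """Minimal executor lifecycle: init on start, shutdown on removal.
    The bug report is that the shutdown step is skipped when dynamic
    allocation reclaims an idle executor."""

    def __init__(self, plugin):
        self.plugin = plugin
        self.plugin.init()

    def remove(self, reason):
        # Expected behaviour: the plugin shutdown hook runs regardless of
        # whether removal is an idle timeout or a normal application exit.
        self.plugin.shutdown()
        return reason


plugin = ExecutorPluginStub()
executor = Executor(plugin)
executor.remove("idle timeout under dynamic allocation")
# plugin.events is now ["init", "shutdown"]
```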






[jira] [Updated] (SPARK-30119) Support pagination for spark streaming tab

2019-12-03 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30119:
-
Affects Version/s: 3.0.0

> Support pagination for  spark streaming tab
> ---
>
> Key: SPARK-30119
> URL: https://issues.apache.org/jira/browse/SPARK-30119
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.4.4, 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> Support pagination for spark streaming tab






[jira] [Created] (SPARK-30119) Support pagination for spark streaming tab

2019-12-03 Thread jobit mathew (Jira)
jobit mathew created SPARK-30119:


 Summary: Support pagination for  spark streaming tab
 Key: SPARK-30119
 URL: https://issues.apache.org/jira/browse/SPARK-30119
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 2.4.4
Reporter: jobit mathew


Support pagination for spark streaming tab






[jira] [Updated] (SPARK-30099) Improve Analyzed Logical Plan as duplicate AnalysisExceptions are coming

2019-12-02 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30099:
-
Description: 
Spark SQL 
 explain extended select * from any non-existing table shows duplicate 
AnalysisExceptions.
{code:java}
 spark-sql> explain extended select * from wrong

== Parsed Logical Plan ==
 'Project [*]
 +- 'UnresolvedRelation `wrong`

== Analyzed Logical Plan ==
 org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
 org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
 == Optimized Logical Plan ==
 org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
 == Physical Plan ==
 org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
 Time taken: 6.0 seconds, Fetched 1 row(s)
 19/12/02 14:33:32 INFO SparkSQLCLIDriver: Time taken: 6.0 seconds, Fetched 1 row(s)
 spark-sql>
{code}

  was:
Spark SQL 
explain extended select * from  any non existing table  shows 
explain extended select * from wrong

== Parsed Logical Plan ==
'Project [*]
+- 'UnresolvedRelation `wrong`

== Analyzed Logical Plan ==
org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
== Optimized Logical Plan ==
org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
== Physical Plan ==
org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
Time taken: 6.0 seconds, Fetched 1 row(s)
19/12/02 14:33:32 INFO SparkSQLCLIDriver: Time taken: 6.0 seconds, Fetched 1 row(s)
spark-sql>



> Improve Analyzed Logical Plan as duplicate AnalysisExceptions are coming
> 
>
> Key: SPARK-30099
> URL: https://issues.apache.org/jira/browse/SPARK-30099
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> Spark SQL 
>  explain extended select * from any non existing table shows duplicate 
> AnalysisExceptions.
> {code:java}
>  spark-sql> explain extended select * from wrong
> == Parsed Logical Plan ==
>  'Project [*]
>  +- 'UnresolvedRelation `wrong`
> == Analyzed Logical Plan ==
>  org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
>  org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
>  == Optimized Logical Plan ==
>  org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
>  == Physical Plan ==
>  org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
>  Time taken: 6.0 seconds, Fetched 1 row(s)
>  19/12/02 14:33:32 INFO SparkSQLCLIDriver: Time taken: 6.0 seconds, Fetched 1 row(s)
>  spark-sql>
> {code}
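One possible shape of the improvement — sketched here as a plain-Python pass over the explain output, not the actual Spark fix — is to keep only the first occurrence of an identical AnalysisException line:

```python
def dedupe_explain(output: str) -> str:
    """Collapse an explain-extended dump so an identical AnalysisException
    message is printed only once (illustrative sketch only)."""
    seen = set()
    kept = []
    for line in output.splitlines():
        if "AnalysisException" in line:
            if line in seen:
                continue  # drop repeats of the same exception text
            seen.add(line)
        kept.append(line)
    return "\n".join(kept)

dump = """== Parsed Logical Plan ==
'Project [*]
== Analyzed Logical Plan ==
org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
== Optimized Logical Plan ==
org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31"""

# dedupe_explain(dump) keeps the exception line only once while leaving
# the plan section headers intact.
```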






[jira] [Updated] (SPARK-30099) Improve Analyzed Logical Plan as duplicate AnalysisExceptions are coming

2019-12-02 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-30099:
-
Description: 
Spark SQL 
explain extended select * from  any non existing table  shows 
explain extended select * from wrong

== Parsed Logical Plan ==
'Project [*]
+- 'UnresolvedRelation `wrong`

== Analyzed Logical Plan ==
org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
== Optimized Logical Plan ==
org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
== Physical Plan ==
org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
Time taken: 6.0 seconds, Fetched 1 row(s)
19/12/02 14:33:32 INFO SparkSQLCLIDriver: Time taken: 6.0 seconds, Fetched 1 row(s)
spark-sql>


> Improve Analyzed Logical Plan as duplicate AnalysisExceptions are coming
> 
>
> Key: SPARK-30099
> URL: https://issues.apache.org/jira/browse/SPARK-30099
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> Spark SQL 
> explain extended select * from  any non existing table  shows 
> explain extended select * from wrong
> == Parsed Logical Plan ==
> 'Project [*]
> +- 'UnresolvedRelation `wrong`
> == Analyzed Logical Plan ==
> org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
> org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
> == Optimized Logical Plan ==
> org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
> == Physical Plan ==
> org.apache.spark.sql.AnalysisException: Table or view not found: wrong; line 1 pos 31
> Time taken: 6.0 seconds, Fetched 1 row(s)
> 19/12/02 14:33:32 INFO SparkSQLCLIDriver: Time taken: 6.0 seconds, Fetched 1 row(s)
> spark-sql>






[jira] [Created] (SPARK-30099) Improve Analyzed Logical Plan as duplicate AnalysisExceptions are coming

2019-12-02 Thread jobit mathew (Jira)
jobit mathew created SPARK-30099:


 Summary: Improve Analyzed Logical Plan as duplicate 
AnalysisExceptions are coming
 Key: SPARK-30099
 URL: https://issues.apache.org/jira/browse/SPARK-30099
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew









[jira] [Created] (SPARK-29913) Improve Exception in postgreCastToBoolean

2019-11-15 Thread jobit mathew (Jira)
jobit mathew created SPARK-29913:


 Summary: Improve Exception in postgreCastToBoolean 
 Key: SPARK-29913
 URL: https://issues.apache.org/jira/browse/SPARK-29913
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew


Improve Exception in postgreCastToBoolean 






[jira] [Commented] (SPARK-29887) PostgreSQL dialect: cast to smallint

2019-11-13 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16973942#comment-16973942
 ] 

jobit mathew commented on SPARK-29887:
--

I will work on this 

> PostgreSQL dialect: cast to smallint
> 
>
> Key: SPARK-29887
> URL: https://issues.apache.org/jira/browse/SPARK-29887
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> Make SparkSQL's cast to smallint behavior consistent with PostgreSQL when
> spark.sql.dialect is configured as PostgreSQL.
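The PostgreSQL semantics being targeted can be modelled in a few lines (an illustrative sketch with a hypothetical `pg_cast_smallint` helper, not the actual Spark change):

```python
def pg_cast_smallint(text: str) -> int:
    """PostgreSQL-style cast of a string to smallint: trims whitespace,
    rejects non-numeric input, and rejects values outside the int2 range
    (illustrative model of the target semantics, not Spark code)."""
    s = text.strip()
    try:
        value = int(s)
    except ValueError:
        raise ValueError(f'invalid input syntax for type smallint: "{text}"')
    if not -32768 <= value <= 32767:
        raise ValueError(f'smallint out of range: "{text}"')
    return value

# pg_cast_smallint(" 123 ") -> 123, while pg_cast_smallint("40000") and
# pg_cast_smallint("ff") raise errors instead of silently yielding NULL
# or a wrapped value as a lenient cast would.
```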






[jira] [Created] (SPARK-29887) PostgreSQL dialect: cast to smallint

2019-11-13 Thread jobit mathew (Jira)
jobit mathew created SPARK-29887:


 Summary: PostgreSQL dialect: cast to smallint
 Key: SPARK-29887
 URL: https://issues.apache.org/jira/browse/SPARK-29887
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew


Make SparkSQL's cast to smallint behavior consistent with PostgreSQL when
spark.sql.dialect is configured as PostgreSQL.






[jira] [Updated] (SPARK-29846) PostgreSQL dialect: cast to char

2019-11-11 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-29846:
-
Description: 
Make SparkSQL's cast to char behavior consistent with PostgreSQL when
spark.sql.dialect is configured as PostgreSQL.
{code:java}
spark-sql> select cast ('10.22333' as char(5));
10.22333
Time taken: 0.062 seconds, Fetched 1 row(s)
spark-sql>
{code}
*postgresql*
 select cast ('10.22333' as char(5));
 
||  ||bpchar||
|1|10.22|

  was:
Make SparkSQL's cast to char behavior consistent with PostgreSQL when
spark.sql.dialect is configured as PostgreSQL.
{code:java}
spark-sql> select cast ('10.22333' as char(5));
10.22333
Time taken: 0.062 seconds, Fetched 1 row(s)
spark-sql>
*postgresql*
select cast ('10.22333' as char(5));
bpchar
1   10.22


{code}


> PostgreSQL dialect: cast to char
> 
>
> Key: SPARK-29846
> URL: https://issues.apache.org/jira/browse/SPARK-29846
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> Make SparkSQL's cast to char behavior consistent with PostgreSQL when
> spark.sql.dialect is configured as PostgreSQL.
> {code:java}
> spark-sql> select cast ('10.22333' as char(5));
> 10.22333
> Time taken: 0.062 seconds, Fetched 1 row(s)
> spark-sql>
> {code}
> *postgresql*
>  select cast ('10.22333' as char(5));
>  
> ||  ||bpchar||
> |1|10.22|
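The PostgreSQL `char(n)` (bpchar) behaviour shown above can be modelled as truncate-then-pad (an illustrative sketch with a hypothetical helper, not the actual Spark change):

```python
def pg_cast_char(text: str, n: int) -> str:
    """PostgreSQL char(n)/bpchar cast: truncate to n characters and pad
    with trailing spaces up to n (illustrative model of the target
    semantics, not Spark code)."""
    return text[:n].ljust(n)

# pg_cast_char("10.22333", 5) -> "10.22", matching the bpchar row shown
# in the report, whereas the Spark output keeps the full "10.22333".
```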






[jira] [Updated] (SPARK-29846) PostgreSQL dialect: cast to char

2019-11-11 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-29846:
-
Description: 
Make SparkSQL's cast to char behavior consistent with PostgreSQL when
spark.sql.dialect is configured as PostgreSQL.
{code:java}
spark-sql> select cast ('10.22333' as char(5));
10.22333
Time taken: 0.062 seconds, Fetched 1 row(s)
spark-sql>
*postgresql*
select cast ('10.22333' as char(5));
bpchar
1   10.22


{code}

  was:
Make SparkSQL's cast to char behavior consistent with PostgreSQL when
spark.sql.dialect is configured as PostgreSQL.


> PostgreSQL dialect: cast to char
> 
>
> Key: SPARK-29846
> URL: https://issues.apache.org/jira/browse/SPARK-29846
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> Make SparkSQL's cast to char behavior consistent with PostgreSQL when
> spark.sql.dialect is configured as PostgreSQL.
> {code:java}
> spark-sql> select cast ('10.22333' as char(5));
> 10.22333
> Time taken: 0.062 seconds, Fetched 1 row(s)
> spark-sql>
> *postgresql*
> select cast ('10.22333' as char(5));
>   bpchar
> 1 10.22
> {code}






[jira] [Created] (SPARK-29846) PostgreSQL dialect: cast to char

2019-11-11 Thread jobit mathew (Jira)
jobit mathew created SPARK-29846:


 Summary: PostgreSQL dialect: cast to char
 Key: SPARK-29846
 URL: https://issues.apache.org/jira/browse/SPARK-29846
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew


Make SparkSQL's cast to char behavior consistent with PostgreSQL when
spark.sql.dialect is configured as PostgreSQL.






[jira] [Created] (SPARK-29845) PostgreSQL dialect: cast to decimal

2019-11-11 Thread jobit mathew (Jira)
jobit mathew created SPARK-29845:


 Summary: PostgreSQL dialect: cast to decimal
 Key: SPARK-29845
 URL: https://issues.apache.org/jira/browse/SPARK-29845
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew


Make SparkSQL's cast to decimal behavior consistent with PostgreSQL when
spark.sql.dialect is configured as PostgreSQL.






[jira] [Created] (SPARK-29843) PostgreSQL dialect: cast to float

2019-11-11 Thread jobit mathew (Jira)
jobit mathew created SPARK-29843:


 Summary: PostgreSQL dialect: cast to float
 Key: SPARK-29843
 URL: https://issues.apache.org/jira/browse/SPARK-29843
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew


Make SparkSQL's cast to float behavior consistent with PostgreSQL when
spark.sql.dialect is configured as PostgreSQL.






[jira] [Created] (SPARK-29842) PostgreSQL dialect: cast to double

2019-11-11 Thread jobit mathew (Jira)
jobit mathew created SPARK-29842:


 Summary: PostgreSQL dialect: cast to double
 Key: SPARK-29842
 URL: https://issues.apache.org/jira/browse/SPARK-29842
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew


Make SparkSQL's cast to double behavior consistent with PostgreSQL when
spark.sql.dialect is configured as PostgreSQL.

some examples
{code:java}
spark-sql> select CAST ('10.2' AS DOUBLE PRECISION);
Error in query:
extraneous input 'PRECISION' expecting ')'(line 1, pos 30)

== SQL ==
select CAST ('10.2' AS DOUBLE PRECISION)
--^^^


spark-sql> select CAST ('10.2' AS DOUBLE);
10.2
Time taken: 0.08 seconds, Fetched 1 row(s)
spark-sql> select CAST ('10.' AS DOUBLE);
10.
Time taken: 0.08 seconds, Fetched 1 row(s)
spark-sql> select CAST ('ff' AS DOUBLE);
NULL
Time taken: 0.08 seconds, Fetched 1 row(s)
spark-sql> select CAST ('1' AS DOUBLE);
1.1112E16
Time taken: 0.067 seconds, Fetched 1 row(s)
spark-sql> 
{code}
PostgreSQL:

select CAST ('10.222' AS DOUBLE PRECISION);
select CAST ('1' AS DOUBLE PRECISION);
select CAST ('ff' AS DOUBLE PRECISION);

Result of the first cast:
||  ||float8||
|1|10,222|

Result of the second cast:
||  ||float8||
|1|1,11E+16|

The third cast fails:

22P02: invalid input syntax for type double precision: "ff"






[jira] [Created] (SPARK-29841) PostgreSQL dialect: cast to date

2019-11-11 Thread jobit mathew (Jira)
jobit mathew created SPARK-29841:


 Summary: PostgreSQL dialect: cast to date
 Key: SPARK-29841
 URL: https://issues.apache.org/jira/browse/SPARK-29841
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew


Make Spark SQL's cast-to-date behavior consistent with PostgreSQL when spark.sql.dialect is configured as PostgreSQL.






[jira] [Created] (SPARK-29840) PostgreSQL dialect: cast to integer

2019-11-11 Thread jobit mathew (Jira)
jobit mathew created SPARK-29840:


 Summary: PostgreSQL dialect: cast to integer
 Key: SPARK-29840
 URL: https://issues.apache.org/jira/browse/SPARK-29840
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew


Make Spark SQL's cast-to-integer behavior consistent with PostgreSQL when spark.sql.dialect is configured as PostgreSQL.

Example: *current Spark SQL behavior*
{code:java}
spark-sql> select   CAST ('10C' AS INTEGER);
NULL
Time taken: 0.051 seconds, Fetched 1 row(s)
spark-sql>
{code}
*PostgreSQL*
{code:java}
select   CAST ('10C' AS INTEGER);
Error(s), warning(s):

22P02: invalid input syntax for integer: "10C"
{code}
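The NULL-vs-error contrast above can be sketched in plain Python (a hypothetical illustration, not Spark's or PostgreSQL's actual cast code): Spark's non-ANSI cast maps malformed input to NULL, while PostgreSQL's strict cast raises an error.

```python
# Hypothetical sketch, not real Spark/PostgreSQL code: contrast a lenient
# cast (malformed input -> None, i.e. SQL NULL) with a strict cast
# (malformed input -> error, like PostgreSQL's 22P02).
def spark_style_cast_int(s):
    """Lenient: return None instead of failing on bad input."""
    try:
        return int(s.strip())
    except ValueError:
        return None

def postgres_style_cast_int(s):
    """Strict: raise on bad input, as PostgreSQL's 22P02 error does."""
    try:
        return int(s.strip())
    except ValueError:
        raise ValueError(f'invalid input syntax for integer: "{s}"')

print(spark_style_cast_int("10C"))  # None, mirroring Spark's NULL
```

The difference is only in error handling, which is what the dialect switch would have to change.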






[jira] [Created] (SPARK-29838) PostgreSQL dialect: cast to timestamp

2019-11-11 Thread jobit mathew (Jira)
jobit mathew created SPARK-29838:


 Summary: PostgreSQL dialect: cast to timestamp
 Key: SPARK-29838
 URL: https://issues.apache.org/jira/browse/SPARK-29838
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.0.0
Reporter: jobit mathew


Make Spark SQL's cast-to-timestamp behavior consistent with PostgreSQL when spark.sql.dialect is configured as PostgreSQL.






[jira] [Commented] (SPARK-29788) Fix the typo errors in the SQL reference document merges

2019-11-07 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969226#comment-16969226
 ] 

jobit mathew commented on SPARK-29788:
--

I will work on this.

> Fix the typo errors in the SQL reference document merges
> 
>
> Key: SPARK-29788
> URL: https://issues.apache.org/jira/browse/SPARK-29788
> Project: Spark
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> Fix the typo errors in the SQL reference document merges.






[jira] [Created] (SPARK-29788) Fix the typo errors in the SQL reference document merges

2019-11-07 Thread jobit mathew (Jira)
jobit mathew created SPARK-29788:


 Summary: Fix the typo errors in the SQL reference document merges
 Key: SPARK-29788
 URL: https://issues.apache.org/jira/browse/SPARK-29788
 Project: Spark
  Issue Type: Sub-task
  Components: Documentation
Affects Versions: 3.0.0
Reporter: jobit mathew


Fix the typo errors in the SQL reference document merges.






[jira] [Commented] (SPARK-29760) Document VALUES statement in SQL Reference.

2019-11-06 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16968850#comment-16968850
 ] 

jobit mathew commented on SPARK-29760:
--

[~srowen], it can be added as part of "Build a SQL reference doc":
https://issues.apache.org/jira/browse/SPARK-28588

> Document VALUES statement in SQL Reference.
> ---
>
> Key: SPARK-29760
> URL: https://issues.apache.org/jira/browse/SPARK-29760
> Project: Spark
>  Issue Type: Sub-task
>  Components: Documentation, SQL
>Affects Versions: 2.4.4
>Reporter: jobit mathew
>Priority: Minor
>
> spark-sql also supports *VALUES *.
> {code:java}
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three');
> 1   one
> 2   two
> 3   three
> Time taken: 0.015 seconds, Fetched 3 row(s)
> spark-sql>
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') limit 2;
> 1   one
> 2   two
> Time taken: 0.014 seconds, Fetched 2 row(s)
> spark-sql>
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') order by 2;
> 1   one
> 3   three
> 2   two
> Time taken: 0.153 seconds, Fetched 3 row(s)
> spark-sql>
> {code}
> or even *values *can be used along with INSERT INTO or select.
> refer: https://www.postgresql.org/docs/current/sql-values.html 
> So please confirm VALUES also can be documented or not.






[jira] [Created] (SPARK-29775) Support truncate multiple tables

2019-11-06 Thread jobit mathew (Jira)
jobit mathew created SPARK-29775:


 Summary: Support truncate multiple tables
 Key: SPARK-29775
 URL: https://issues.apache.org/jira/browse/SPARK-29775
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 2.4.4
Reporter: jobit mathew


Spark SQL supports truncating a single table:

TRUNCATE TABLE t1;

But PostgreSQL supports truncating multiple tables in one statement:

TRUNCATE bigtable, fattable;

So Spark could also support truncating multiple tables.
[https://www.postgresql.org/docs/12/sql-truncate.html]
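One way the multi-table form could be supported is by expanding it into the single-table statements Spark already accepts; a minimal sketch (hypothetical helper, not Spark code):

```python
# Hypothetical sketch: rewrite a PostgreSQL-style multi-table TRUNCATE
# into the single-table TRUNCATE TABLE form Spark SQL already supports.
def expand_truncate(stmt):
    prefix = "TRUNCATE "
    assert stmt.upper().startswith(prefix), "expected a TRUNCATE statement"
    # split the comma-separated table list and emit one statement per table
    tables = [t.strip() for t in stmt[len(prefix):].rstrip(";").split(",")]
    return [f"TRUNCATE TABLE {t};" for t in tables]

print(expand_truncate("TRUNCATE bigtable, fattable;"))
# ['TRUNCATE TABLE bigtable;', 'TRUNCATE TABLE fattable;']
```

A real implementation would extend the parser instead, but the semantics are the same: each listed table is truncated independently.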






[jira] [Comment Edited] (SPARK-29760) Document VALUES statement in SQL Reference.

2019-11-06 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16967500#comment-16967500
 ] 

jobit mathew edited comment on SPARK-29760 at 11/6/19 1:43 PM:
---

[~LI,Xiao] and [~dkbiswal] could you please confirm this.

[~srowen] any comment on this?


was (Author: jobitmathew):
[~LI,Xiao] and [~dkbiswal] could you please confirm this

> Document VALUES statement in SQL Reference.
> ---
>
> Key: SPARK-29760
> URL: https://issues.apache.org/jira/browse/SPARK-29760
> Project: Spark
>  Issue Type: Sub-task
>  Components: Documentation, SQL
>Affects Versions: 2.4.4
>Reporter: jobit mathew
>Priority: Minor
>
> spark-sql also supports *VALUES *.
> {code:java}
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three');
> 1   one
> 2   two
> 3   three
> Time taken: 0.015 seconds, Fetched 3 row(s)
> spark-sql>
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') limit 2;
> 1   one
> 2   two
> Time taken: 0.014 seconds, Fetched 2 row(s)
> spark-sql>
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') order by 2;
> 1   one
> 3   three
> 2   two
> Time taken: 0.153 seconds, Fetched 3 row(s)
> spark-sql>
> {code}
> or even *values *can be used along with INSERT INTO or select.
> refer: https://www.postgresql.org/docs/current/sql-values.html 
> So please confirm VALUES also can be documented or not.






[jira] [Commented] (SPARK-28296) Improved VALUES support

2019-11-06 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16968332#comment-16968332
 ] 

jobit mathew commented on SPARK-28296:
--

[~petertoth] there are some more commands that are valid in PostgreSQL but are not supported in Spark SQL.

 

VALUES (1, 'one'), (2, 'two'), (3, 'three') FETCH FIRST 3 rows only;

VALUES (1, 'one'), (2, 'two'), (3, 'three') OFFSET 1 row;
 
Results in PostgreSQL:

||  ||column1||column2||
|1|1|one|
|2|2|two|
|3|3|three|

||  ||column1||column2||
|1|2|two|
|2|3|three|

But the same queries fail in spark-sql:

 

spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') FETCH FIRST 3 rows only;
Error in query:
mismatched input 'FIRST' expecting (line 1, pos 50)

== SQL ==
VALUES (1, 'one'), (2, 'two'), (3, 'three') FETCH FIRST 3 rows only
--^^^

spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') OFFSET 1 row;
Error in query:
mismatched input '1' expecting (line 1, pos 51)

== SQL ==
VALUES (1, 'one'), (2, 'two'), (3, 'three') OFFSET 1 row
---^^^

spark-sql>
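For reference, the row-set semantics of the two rejected clauses are simple (a hypothetical Python sketch, assuming FETCH FIRST n ROWS ONLY behaves like LIMIT n and OFFSET n skips the first n rows, as in PostgreSQL):

```python
# Hypothetical sketch of the PostgreSQL clauses Spark rejects above:
# FETCH FIRST n ROWS ONLY keeps the first n rows (like LIMIT n),
# and OFFSET n skips the first n rows.
rows = [(1, "one"), (2, "two"), (3, "three")]

def fetch_first(rows, n):
    return rows[:n]   # same result set as LIMIT n

def offset(rows, n):
    return rows[n:]   # skip the first n rows

print(fetch_first(rows, 3))  # all three rows
print(offset(rows, 1))       # [(2, 'two'), (3, 'three')]
```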

 

> Improved VALUES support
> ---
>
> Key: SPARK-28296
> URL: https://issues.apache.org/jira/browse/SPARK-28296
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: Peter Toth
>Priority: Major
>
> These are valid queries in PostgreSQL, but they don't work in Spark SQL:
> {noformat}
> values ((select 1));
> values ((select c from test1));
> select (values(c)) from test10;
> with cte(foo) as ( values(42) ) values((select foo from cte));
> {noformat}
> where test1 and test10:
> {noformat}
> CREATE TABLE test1 (c INTEGER);
> INSERT INTO test1 VALUES(1);
> CREATE TABLE test10 (c INTEGER);
> INSERT INTO test10 SELECT generate_sequence(1, 10);
> {noformat}






[jira] [Comment Edited] (SPARK-29760) Document VALUES statement in SQL Reference.

2019-11-05 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16967500#comment-16967500
 ] 

jobit mathew edited comment on SPARK-29760 at 11/6/19 6:11 AM:
---

[~LI,Xiao] and [~dkbiswal] could you please confirm this


was (Author: jobitmathew):
[~kid_fe] and [~dkbiswal] could you please confirm this

> Document VALUES statement in SQL Reference.
> ---
>
> Key: SPARK-29760
> URL: https://issues.apache.org/jira/browse/SPARK-29760
> Project: Spark
>  Issue Type: Sub-task
>  Components: Documentation, SQL
>Affects Versions: 2.4.4
>Reporter: jobit mathew
>Priority: Minor
>
> spark-sql also supports *VALUES *.
> {code:java}
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three');
> 1   one
> 2   two
> 3   three
> Time taken: 0.015 seconds, Fetched 3 row(s)
> spark-sql>
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') limit 2;
> 1   one
> 2   two
> Time taken: 0.014 seconds, Fetched 2 row(s)
> spark-sql>
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') order by 2;
> 1   one
> 3   three
> 2   two
> Time taken: 0.153 seconds, Fetched 3 row(s)
> spark-sql>
> {code}
> or even *values *can be used along with INSERT INTO or select.
> refer: https://www.postgresql.org/docs/current/sql-values.html 
> So please confirm VALUES also can be documented or not.






[jira] [Commented] (SPARK-29760) Document VALUES statement in SQL Reference.

2019-11-05 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16967500#comment-16967500
 ] 

jobit mathew commented on SPARK-29760:
--

[~kid_fe] and [~dkbiswal] could you please confirm this

> Document VALUES statement in SQL Reference.
> ---
>
> Key: SPARK-29760
> URL: https://issues.apache.org/jira/browse/SPARK-29760
> Project: Spark
>  Issue Type: Sub-task
>  Components: Documentation, SQL
>Affects Versions: 2.4.4
>Reporter: jobit mathew
>Priority: Minor
>
> spark-sql also supports *VALUES *.
> {code:java}
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three');
> 1   one
> 2   two
> 3   three
> Time taken: 0.015 seconds, Fetched 3 row(s)
> spark-sql>
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') limit 2;
> 1   one
> 2   two
> Time taken: 0.014 seconds, Fetched 2 row(s)
> spark-sql>
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') order by 2;
> 1   one
> 3   three
> 2   two
> Time taken: 0.153 seconds, Fetched 3 row(s)
> spark-sql>
> {code}
> or even *values *can be used along with INSERT INTO or select.
> refer: https://www.postgresql.org/docs/current/sql-values.html 
> So please confirm VALUES also can be documented or not.






[jira] [Created] (SPARK-29760) Document VALUES statement in SQL Reference.

2019-11-05 Thread jobit mathew (Jira)
jobit mathew created SPARK-29760:


 Summary: Document VALUES statement in SQL Reference.
 Key: SPARK-29760
 URL: https://issues.apache.org/jira/browse/SPARK-29760
 Project: Spark
  Issue Type: Sub-task
  Components: Documentation, SQL
Affects Versions: 2.4.4
Reporter: jobit mathew


spark-sql also supports *VALUES*.

{code:java}
spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three');
1   one
2   two
3   three
Time taken: 0.015 seconds, Fetched 3 row(s)
spark-sql>

spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') limit 2;
1   one
2   two
Time taken: 0.014 seconds, Fetched 2 row(s)
spark-sql>
spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') order by 2;
1   one
3   three
2   two
Time taken: 0.153 seconds, Fetched 3 row(s)
spark-sql>
{code}
*VALUES* can even be used along with INSERT INTO or SELECT.
Refer: https://www.postgresql.org/docs/current/sql-values.html

So please confirm whether VALUES should also be documented.
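The semantics of the VALUES queries above can be sketched outside SQL (a hypothetical illustration): VALUES produces an inline row set that LIMIT and ORDER BY then operate on like any other relation.

```python
# Hypothetical sketch of what the VALUES transcript above computes:
# an inline row set, then LIMIT and ORDER BY applied to it.
values = [(1, "one"), (2, "two"), (3, "three")]

limited = values[:2]                          # ... LIMIT 2
ordered = sorted(values, key=lambda r: r[1])  # ... ORDER BY 2 (second column)

print(limited)  # [(1, 'one'), (2, 'two')]
print(ordered)  # [(1, 'one'), (3, 'three'), (2, 'two')]
```

The ORDER BY 2 result matches the spark-sql output: the string column sorts "one", "three", "two".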






[jira] [Created] (SPARK-29741) Spark Application UI- In Environment tab add "Search" option

2019-11-04 Thread jobit mathew (Jira)
jobit mathew created SPARK-29741:


 Summary: Spark Application UI- In Environment tab add "Search" 
option
 Key: SPARK-29741
 URL: https://issues.apache.org/jira/browse/SPARK-29741
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 3.0.0
Reporter: jobit mathew


Spark Application UI: in the Environment tab, add a "Search" option.

The Environment tab now has several sections of information and properties (Runtime Information, Spark Properties, Hadoop Properties, System Properties, and Classpath Entries), so it would be better to provide one *Search* field. That would make it easy to find any parameter's value even when we don't know which section it appears in.







[jira] [Updated] (SPARK-29476) Add tooltip information for Thread Dump links and Thread details table columns in Executors Tab

2019-11-03 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-29476:
-
Priority: Trivial  (was: Major)

> Add tooltip information for Thread Dump links and Thread details table 
> columns in Executors Tab
> ---
>
> Key: SPARK-29476
> URL: https://issues.apache.org/jira/browse/SPARK-29476
> Project: Spark
>  Issue Type: Sub-task
>  Components: Web UI
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Trivial
>
> I think it is better to have some tool tips in the  Executors tab specially 
> for Thread dump link[for most of the other columns tool tip is already added] 
> to explain more information what it meant like *thread dump for executors and 
> drivers*.
> And  after clicking on thread dump link ,the next page contains the *search* 
> and *thread table detail*s.
> In this page also add *tool tip* for *Search,*better to mention what and all 
> it will search like it will search the content from table including stack 
> trace details.And *tool tip* for *thread table column heading detail*s for 
> better understanding.






[jira] [Updated] (SPARK-29476) Add tooltip information for Thread Dump links and Thread details table columns in Executors Tab

2019-11-03 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-29476:
-
Description: 
I think it is better to have some tooltips in the Executors tab, especially for the Thread Dump link (most of the other columns already have a tooltip), to explain what it means, such as *thread dump for executors and drivers*.

After clicking the Thread Dump link, the next page contains a *search* field and the *thread details table*.

On that page, also add a *tooltip* for *Search*, mentioning what it searches (the table contents, including stack trace details), and *tooltips* for the *thread table column headings* for better understanding.

> Add tooltip information for Thread Dump links and Thread details table 
> columns in Executors Tab
> ---
>
> Key: SPARK-29476
> URL: https://issues.apache.org/jira/browse/SPARK-29476
> Project: Spark
>  Issue Type: Sub-task
>  Components: Web UI
>Affects Versions: 3.0.0
>Reporter: jobit mathew
>Priority: Major
>
> I think it is better to have some tool tips in the  Executors tab specially 
> for Thread dump link[for most of the other columns tool tip is already added] 
> to explain more information what it meant like *thread dump for executors and 
> drivers*.
> And  after clicking on thread dump link ,the next page contains the *search* 
> and *thread table detail*s.
> In this page also add *tool tip* for *Search,*better to mention what and all 
> it will search like it will search the content from table including stack 
> trace details.And *tool tip* for *thread table column heading detail*s for 
> better understanding.






[jira] [Commented] (SPARK-29687) Fix jdbc metrics counter type to long

2019-10-31 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16963993#comment-16963993
 ] 

jobit mathew commented on SPARK-29687:
--

Hi, can you give some details about this variable and where it is used?

> Fix jdbc metrics counter type to long
> -
>
> Key: SPARK-29687
> URL: https://issues.apache.org/jira/browse/SPARK-29687
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: ulysses you
>Priority: Minor
>
> JDBC metrics counter var is an int type that may by overflow. Change it to 
> Long type.
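For context on the motivation, here is a sketch of 32-bit wraparound (a Python simulation of a Java int; `to_int32` is a hypothetical helper, not from the Spark code base):

```python
# Hypothetical sketch of why a 32-bit counter is risky: simulate Java int
# arithmetic, where incrementing past 2**31 - 1 wraps to a negative value.
def to_int32(n):
    """Reduce an integer to Java's signed 32-bit range."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

counter = 2**31 - 1           # Integer.MAX_VALUE
print(to_int32(counter + 1))  # -2147483648: the counter silently goes negative
```

A Long counter (64-bit) pushes the same wraparound out to 2**63 - 1, which is effectively unreachable for a metrics counter.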






[jira] [Updated] (SPARK-29685) Spark SQL also better to show the column details while doing SELECT * from table, like sparkshell and spark beeline

2019-10-31 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-29685:
-
Issue Type: Improvement  (was: Bug)

> Spark SQL also better to show the column details while doing SELECT * from 
> table, like sparkshell and spark beeline
> ---
>
> Key: SPARK-29685
> URL: https://issues.apache.org/jira/browse/SPARK-29685
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.4, 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> Spark SQL also better to show the column details in top while doing SELECT * 
> from table, like spark scala shell and spark beeline shows in table format.
> *Test steps*
> 1.create table table1(id int,name string,address string);
> 2.insert into table1 values (5,name1,add1);
> 3.insert into table1 values (5,name2,add2);
> 4.insert into table1 values (5,name3,add3);
> {code:java}
> spark-sql> select * from table1;
> 5   name3   add3
> 5   name1   add1
> 5   name2   add2
> But in spark scala shell & spark beeline shows the columns details also in 
> table format
> scala> sql("select * from table1").show()
> +---+-+---+
> | id| name|address|
> +---+-+---+
> |  5|name3|   add3|
> |  5|name1|   add1|
> |  5|name2|   add2|
> +---+-+---+
> scala>
> 0: jdbc:hive2://10.18.18.214:23040/default> select * from table1;
> +-++--+--+
> | id  |  name  | address  |
> +-++--+--+
> | 5   | name3  | add3 |
> | 5   | name1  | add1 |
> | 5   | name2  | add2 |
> +-++--+--+
> 3 rows selected (0.679 seconds)
> 0: jdbc:hive2://10.18.18.214:23040/default>
> {code}






[jira] [Updated] (SPARK-29685) Spark SQL also better to show the column details while doing SELECT * from table, like sparkshell and spark beeline

2019-10-31 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-29685:
-
Description: 
It would be better if Spark SQL also showed the column names at the top when doing SELECT * from a table, the way spark-shell and Spark Beeline show results in table format.

*Test steps*
1.create table table1(id int,name string,address string);
2.insert into table1 values (5,name1,add1);
3.insert into table1 values (5,name2,add2);
4.insert into table1 values (5,name3,add3);
{code:java}
spark-sql> select * from table1;
5   name3   add3
5   name1   add1
5   name2   add2

But in spark scala shell & spark beeline shows the columns details also in 
table format

scala> sql("select * from table1").show()
+---+-+---+
| id| name|address|
+---+-+---+
|  5|name3|   add3|
|  5|name1|   add1|
|  5|name2|   add2|
+---+-+---+


scala>

0: jdbc:hive2://10.18.18.214:23040/default> select * from table1;
+-++--+--+
| id  |  name  | address  |
+-++--+--+
| 5   | name3  | add3 |
| 5   | name1  | add1 |
| 5   | name2  | add2 |
+-++--+--+
3 rows selected (0.679 seconds)
0: jdbc:hive2://10.18.18.214:23040/default>

{code}


  was:
Spark SQL also better to show the column details in top while doing SELECT * 
from table, like spark scala shell and spark beeline shows in table format.

*Test steps*
1.create table table1(id int,name string,address string);
2.insert into table1 values (5,name1,add1);
3.insert into table1 values (5,name2,add2);
4.insert into table1 values (5,name3,add3);

spark-sql> select * from table1;
5   name3   add3
5   name1   add1
5   name2   add2

But in spark scala shell & spark beeline shows the columns details also in 
table format

scala> sql("select * from table1").show()
+---+-+---+
| id| name|address|
+---+-+---+
|  5|name3|   add3|
|  5|name1|   add1|
|  5|name2|   add2|
+---+-+---+


scala>

0: jdbc:hive2://10.18.18.214:23040/default> select * from table1;
+-++--+--+
| id  |  name  | address  |
+-++--+--+
| 5   | name3  | add3 |
| 5   | name1  | add1 |
| 5   | name2  | add2 |
+-++--+--+
3 rows selected (0.679 seconds)
0: jdbc:hive2://10.18.18.214:23040/default>





> Spark SQL also better to show the column details while doing SELECT * from 
> table, like sparkshell and spark beeline
> ---
>
> Key: SPARK-29685
> URL: https://issues.apache.org/jira/browse/SPARK-29685
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.4, 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> Spark SQL also better to show the column details in top while doing SELECT * 
> from table, like spark scala shell and spark beeline shows in table format.
> *Test steps*
> 1.create table table1(id int,name string,address string);
> 2.insert into table1 values (5,name1,add1);
> 3.insert into table1 values (5,name2,add2);
> 4.insert into table1 values (5,name3,add3);
> {code:java}
> spark-sql> select * from table1;
> 5   name3   add3
> 5   name1   add1
> 5   name2   add2
> But in spark scala shell & spark beeline shows the columns details also in 
> table format
> scala> sql("select * from table1").show()
> +---+-+---+
> | id| name|address|
> +---+-+---+
> |  5|name3|   add3|
> |  5|name1|   add1|
> |  5|name2|   add2|
> +---+-+---+
> scala>
> 0: jdbc:hive2://10.18.18.214:23040/default> select * from table1;
> +-++--+--+
> | id  |  name  | address  |
> +-++--+--+
> | 5   | name3  | add3 |
> | 5   | name1  | add1 |
> | 5   | name2  | add2 |
> +-++--+--+
> 3 rows selected (0.679 seconds)
> 0: jdbc:hive2://10.18.18.214:23040/default>
> {code}






[jira] [Updated] (SPARK-29685) Spark SQL also better to show the column details while doing SELECT * from table, like sparkshell and spark beeline

2019-10-31 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-29685:
-
Description: 
It would be better if Spark SQL also showed the column names at the top when doing SELECT * from a table, the way spark-shell and Spark Beeline show results in table format.

*Test steps*
1.create table table1(id int,name string,address string);
2.insert into table1 values (5,name1,add1);
3.insert into table1 values (5,name2,add2);
4.insert into table1 values (5,name3,add3);

spark-sql> select * from table1;
5   name3   add3
5   name1   add1
5   name2   add2

But in spark scala shell & spark beeline shows the columns details also in 
table format

scala> sql("select * from table1").show()
+---+-+---+
| id| name|address|
+---+-+---+
|  5|name3|   add3|
|  5|name1|   add1|
|  5|name2|   add2|
+---+-+---+


scala>

0: jdbc:hive2://10.18.18.214:23040/default> select * from table1;
+-++--+--+
| id  |  name  | address  |
+-++--+--+
| 5   | name3  | add3 |
| 5   | name1  | add1 |
| 5   | name2  | add2 |
+-++--+--+
3 rows selected (0.679 seconds)
0: jdbc:hive2://10.18.18.214:23040/default>




  was:
Spark SQL also better to show the column details in top while doing SELECT * 
from table, like sparkshell and spark beeline.

*Test steps*
1.create table table1(id int,name string,address string);
2.insert into table1 values (5,name1,add1);
3.insert into table1 values (5,name2,add2);
4.insert into table1 values (5,name3,add3);

spark-sql> select * from table1;
5   name3   add3
5   name1   add1
5   name2   add2





> Spark SQL also better to show the column details while doing SELECT * from 
> table, like sparkshell and spark beeline
> ---
>
> Key: SPARK-29685
> URL: https://issues.apache.org/jira/browse/SPARK-29685
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.4, 3.0.0
>Reporter: jobit mathew
>Priority: Minor
>
> Spark SQL also better to show the column details in top while doing SELECT * 
> from table, like spark scala shell and spark beeline shows in table format.
> *Test steps*
> 1.create table table1(id int,name string,address string);
> 2.insert into table1 values (5,name1,add1);
> 3.insert into table1 values (5,name2,add2);
> 4.insert into table1 values (5,name3,add3);
> spark-sql> select * from table1;
> 5   name3   add3
> 5   name1   add1
> 5   name2   add2
> But in spark scala shell & spark beeline shows the columns details also in 
> table format
> scala> sql("select * from table1").show()
> +---+-+---+
> | id| name|address|
> +---+-+---+
> |  5|name3|   add3|
> |  5|name1|   add1|
> |  5|name2|   add2|
> +---+-+---+
> scala>
> 0: jdbc:hive2://10.18.18.214:23040/default> select * from table1;
> +-++--+--+
> | id  |  name  | address  |
> +-++--+--+
> | 5   | name3  | add3 |
> | 5   | name1  | add1 |
> | 5   | name2  | add2 |
> +-++--+--+
> 3 rows selected (0.679 seconds)
> 0: jdbc:hive2://10.18.18.214:23040/default>






[jira] [Created] (SPARK-29685) Spark SQL also better to show the column details while doing SELECT * from table, like sparkshell and spark beeline

2019-10-31 Thread jobit mathew (Jira)
jobit mathew created SPARK-29685:


 Summary: Spark SQL also better to show the column details while 
doing SELECT * from table, like sparkshell and spark beeline
 Key: SPARK-29685
 URL: https://issues.apache.org/jira/browse/SPARK-29685
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.4.4, 3.0.0
Reporter: jobit mathew


It would be better if Spark SQL also showed the column names at the top when doing SELECT * from a table, like spark-shell and Spark Beeline do.

*Test steps*
1.create table table1(id int,name string,address string);
2.insert into table1 values (5,name1,add1);
3.insert into table1 values (5,name2,add2);
4.insert into table1 values (5,name3,add3);

spark-sql> select * from table1;
5   name3   add3
5   name1   add1
5   name2   add2
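A hypothetical sketch of the requested output (not the spark-sql CLI code): print a header row above the data the way spark-shell's show() renders a table.

```python
# Hypothetical sketch: render rows with a header line, in the style of
# spark-shell's show(). Column widths come from the widest header/value.
rows = [(5, "name3", "add3"), (5, "name1", "add1"), (5, "name2", "add2")]
cols = ("id", "name", "address")

# column width = widest of the header and its values
widths = [max(len(str(v)) for v in (c,) + tuple(r[i] for r in rows))
          for i, c in enumerate(cols)]

def line(vals):
    return "|" + "|".join(str(v).rjust(w) for v, w in zip(vals, widths)) + "|"

sep = "+" + "+".join("-" * w for w in widths) + "+"
print(sep)
print(line(cols))   # header row with the column names
print(sep)
for r in rows:
    print(line(r))
print(sep)
```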









[jira] [Commented] (SPARK-29681) Spark Application UI- environment tab field "value" sort is not working

2019-10-31 Thread jobit mathew (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16963798#comment-16963798
 ] 

jobit mathew commented on SPARK-29681:
--

The issue is observed in Spark Properties and System Properties, so please check all the properties tables.

> Spark Application UI- environment tab field "value" sort is not working 
> 
>
> Key: SPARK-29681
> URL: https://issues.apache.org/jira/browse/SPARK-29681
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.4.4, 3.0.0
>Reporter: jobit mathew
>Priority: Major
> Attachments: Ascend.png, DESCEND.png
>
>
> Spark Application UI-
> In the Environment tab, sorting the "Value" field does not work in either 
> ascending or descending order.
> !Ascend.png|width=854,height=475!
>  
> !DESCEND.png|width=854,height=470!
>  






[jira] [Updated] (SPARK-29681) Spark Application UI- environment tab field "value" sort is not working

2019-10-31 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-29681:
-
Description: 
Spark Application UI-

In the Environment tab, sorting the "Value" field does not work in either 
ascending or descending order.

!Ascend.png|width=854,height=475!

 

!DESCEND.png|width=854,height=470!

 

  was:
Spark Application UI-

In environment tab, field "value" sort is not working if we do sort in 
ascending or descending order.

 


> Spark Application UI- environment tab field "value" sort is not working 
> 
>
> Key: SPARK-29681
> URL: https://issues.apache.org/jira/browse/SPARK-29681
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.4.4, 3.0.0
>Reporter: jobit mathew
>Priority: Major
> Attachments: Ascend.png, DESCEND.png
>
>
> Spark Application UI-
> In the Environment tab, sorting the "Value" field does not work in either 
> ascending or descending order.
> !Ascend.png|width=854,height=475!
>  
> !DESCEND.png|width=854,height=470!
>  






[jira] [Updated] (SPARK-29681) Spark Application UI- environment tab field "value" sort is not working

2019-10-31 Thread jobit mathew (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jobit mathew updated SPARK-29681:
-
Attachment: DESCEND.png

> Spark Application UI- environment tab field "value" sort is not working 
> 
>
> Key: SPARK-29681
> URL: https://issues.apache.org/jira/browse/SPARK-29681
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.4.4, 3.0.0
>Reporter: jobit mathew
>Priority: Major
> Attachments: Ascend.png, DESCEND.png
>
>
> Spark Application UI-
> In the Environment tab, sorting the "Value" field does not work in either 
> ascending or descending order.
>  





