[jira] [Created] (SPARK-23474) mapWithState + async operations = no checkpointing

2018-02-20 Thread Daniel Lanza García (JIRA)
Daniel Lanza García created SPARK-23474:
---

 Summary: mapWithState + async operations = no checkpointing
 Key: SPARK-23474
 URL: https://issues.apache.org/jira/browse/SPARK-23474
 Project: Spark
  Issue Type: Bug
  Components: DStreams
Affects Versions: 2.2.1
Reporter: Daniel Lanza García


In my Spark Streaming job I use mapWithState, which requires checkpointing to be 
enabled. A job is triggered in each batch by the output operation: 
stream.foreachRDD(rdd.foreachPartition()).

Under this setup the job was checkpointing every 10 minutes (with batches of 1 
minute).

I have now changed the output operation to an asynchronous one: 
stream.foreachRDD(rdd.foreachPartitionAsync()).

But checkpointing is no longer taking place. I tried checkpointing the RDD which I 
map with state; it gets checkpointed, but that does not break the lineage, so the 
number of tasks keeps growing with every batch.
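
For context, here is a minimal sketch of the setup described above. The socket source, host/port, checkpoint path, and word-count state are all illustrative assumptions, not the reporter's actual job; it only shows mapWithState combined with a synchronous versus an asynchronous output action.

{code:scala}
import org.apache.spark.streaming.{Seconds, State, StateSpec, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(60))
ssc.checkpoint("hdfs:///tmp/checkpoints")   // required by mapWithState

// Hypothetical input and state: count occurrences per key.
val pairs = ssc.socketTextStream("somehost", 9999).map(word => (word, 1))

val spec = StateSpec.function { (key: String, value: Option[Int], state: State[Int]) =>
  val sum = state.getOption.getOrElse(0) + value.getOrElse(0)
  state.update(sum)
  (key, sum)
}

val stateful = pairs.mapWithState(spec)

// Synchronous output action: with this, the state RDDs were reportedly
// checkpointed every ~10 batches.
// stateful.foreachRDD(rdd => rdd.foreachPartition(_.foreach(println)))

// Asynchronous output action, as in the report: foreachPartitionAsync returns a
// FutureAction immediately, and checkpointing reportedly stops happening.
stateful.foreachRDD { rdd =>
  rdd.foreachPartitionAsync(_.foreach(println))
}

ssc.start()
ssc.awaitTermination()
{code}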






[jira] [Updated] (SPARK-23473) spark.catalog.listTables error when database name starts with a number

2018-02-20 Thread Goun Na (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goun Na updated SPARK-23473:

Description: 
Errors occur when the Hive database name starts with a number, such as 11st.

The full error message is attached, and the issue is reproducible as shown below.


  

scala> spark.catalog.setCurrentDatabase("11st")

scala> spark.catalog.listTables

scala> spark.catalog.listTables
 18/02/21 15:47:44 ERROR log: error in initSerDe: 
java.lang.ClassNotFoundException Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
 at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
 at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
 at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)

  was:
Errors when Hive database name starts with a number such as 11st.

Full error message is attached


  

scala> spark.catalog.setCurrentDatabase("11st")

scala> spark.catalog.listTables

scala> spark.catalog.listTables
 18/02/21 15:47:44 ERROR log: error in initSerDe: 
java.lang.ClassNotFoundException Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
 at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
 at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
 at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)


> spark.catalog.listTables error when database name starts with a number
> --
>
> Key: SPARK-23473
> URL: https://issues.apache.org/jira/browse/SPARK-23473
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.0
>Reporter: Goun Na
>Priority: Trivial
> Attachments: spark_catalog_err.txt
>
>
> Errors occur when the Hive database name starts with a number, such as 11st.
> The full error message is attached, and the issue is reproducible as shown below.
> 
>   
> scala> spark.catalog.setCurrentDatabase("11st")
> scala> spark.catalog.listTables
> scala> spark.catalog.listTables
>  18/02/21 15:47:44 ERROR log: error in initSerDe: 
> java.lang.ClassNotFoundException Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
>  at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
>  at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
>  at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
>  at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
>  at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)






[jira] [Updated] (SPARK-23473) spark.catalog.listTables error when database name starts with a number

2018-02-20 Thread Goun Na (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goun Na updated SPARK-23473:

Description: 
Errors occur when the Hive database name starts with a number, such as 11st.


  

scala> spark.catalog.setCurrentDatabase("11st")

scala> spark.catalog.listTables

scala> spark.catalog.listTables
 18/02/21 15:47:44 ERROR log: error in initSerDe: 
java.lang.ClassNotFoundException Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
 at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
 at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
 at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)

  was:
Errors when Hive database name starts with a number such as 11st. 

Attached is full error message, and reproducible


  

scala> spark.catalog.setCurrentDatabase("11st")

scala> spark.catalog.listTables

scala> spark.catalog.listTables
 18/02/21 15:47:44 ERROR log: error in initSerDe: 
java.lang.ClassNotFoundException Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
 at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
 at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
 at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)


> spark.catalog.listTables error when database name starts with a number
> --
>
> Key: SPARK-23473
> URL: https://issues.apache.org/jira/browse/SPARK-23473
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.0
>Reporter: Goun Na
>Priority: Trivial
> Attachments: spark_catalog_err.txt
>
>
> Errors occur when the Hive database name starts with a number, such as 11st.
> 
>   
> scala> spark.catalog.setCurrentDatabase("11st")
> scala> spark.catalog.listTables
> scala> spark.catalog.listTables
>  18/02/21 15:47:44 ERROR log: error in initSerDe: 
> java.lang.ClassNotFoundException Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
>  at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
>  at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
>  at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
>  at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
>  at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)






[jira] [Updated] (SPARK-23473) spark.catalog.listTables error when database name starts with a number

2018-02-20 Thread Goun Na (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goun Na updated SPARK-23473:

Attachment: spark_catalog_err.txt

> spark.catalog.listTables error when database name starts with a number
> --
>
> Key: SPARK-23473
> URL: https://issues.apache.org/jira/browse/SPARK-23473
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.0
>Reporter: Goun Na
>Priority: Trivial
> Attachments: spark_catalog_err.txt
>
>
>  
>  
> scala> spark.catalog.setCurrentDatabase("11st")
> scala> spark.catalog.listTables
> scala> spark.catalog.listTables
>  18/02/21 15:47:44 ERROR log: error in initSerDe: 
> java.lang.ClassNotFoundException Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
>  at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
>  at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
>  at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
>  at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
>  at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)






[jira] [Updated] (SPARK-23473) spark.catalog.listTables error when database name starts with a number

2018-02-20 Thread Goun Na (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goun Na updated SPARK-23473:

Description: 
Errors occur when the Hive database name starts with a number, such as 11st.

The full error message is attached.


  

scala> spark.catalog.setCurrentDatabase("11st")

scala> spark.catalog.listTables

scala> spark.catalog.listTables
 18/02/21 15:47:44 ERROR log: error in initSerDe: 
java.lang.ClassNotFoundException Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
 at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
 at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
 at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)

  was:
 

 

scala> spark.catalog.setCurrentDatabase("11st")

scala> spark.catalog.listTables

scala> spark.catalog.listTables
 18/02/21 15:47:44 ERROR log: error in initSerDe: 
java.lang.ClassNotFoundException Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
 at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
 at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
 at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)


> spark.catalog.listTables error when database name starts with a number
> --
>
> Key: SPARK-23473
> URL: https://issues.apache.org/jira/browse/SPARK-23473
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.0
>Reporter: Goun Na
>Priority: Trivial
> Attachments: spark_catalog_err.txt
>
>
> Errors occur when the Hive database name starts with a number, such as 11st.
> The full error message is attached.
> 
>   
> scala> spark.catalog.setCurrentDatabase("11st")
> scala> spark.catalog.listTables
> scala> spark.catalog.listTables
>  18/02/21 15:47:44 ERROR log: error in initSerDe: 
> java.lang.ClassNotFoundException Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
>  at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
>  at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
>  at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
>  at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
>  at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)






[jira] [Updated] (SPARK-23473) spark.catalog.listTables error when database name starts with a number

2018-02-20 Thread Goun Na (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goun Na updated SPARK-23473:

Description: 
 

 

scala> spark.catalog.setCurrentDatabase("11st")

scala> spark.catalog.listTables

scala> spark.catalog.listTables
 18/02/21 15:47:44 ERROR log: error in initSerDe: 
java.lang.ClassNotFoundException Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
 at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
 at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
 at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)

  was:
scala> spark.catalog.setCurrentDatabase("11st")

scala> spark.catalog.listTables

scala> spark.catalog.listTables
18/02/21 15:47:44 ERROR log: error in initSerDe: 
java.lang.ClassNotFoundException Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
 at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
 at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
 at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)


> spark.catalog.listTables error when database name starts with a number
> --
>
> Key: SPARK-23473
> URL: https://issues.apache.org/jira/browse/SPARK-23473
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.0
>Reporter: Goun Na
>Priority: Minor
>
>  
>  
> scala> spark.catalog.setCurrentDatabase("11st")
> scala> spark.catalog.listTables
> scala> spark.catalog.listTables
>  18/02/21 15:47:44 ERROR log: error in initSerDe: 
> java.lang.ClassNotFoundException Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
>  at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
>  at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
>  at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
>  at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
>  at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)






[jira] [Updated] (SPARK-23473) spark.catalog.listTables error when database name starts with a number

2018-02-20 Thread Goun Na (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goun Na updated SPARK-23473:

Priority: Trivial  (was: Minor)

> spark.catalog.listTables error when database name starts with a number
> --
>
> Key: SPARK-23473
> URL: https://issues.apache.org/jira/browse/SPARK-23473
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.0
>Reporter: Goun Na
>Priority: Trivial
>
>  
>  
> scala> spark.catalog.setCurrentDatabase("11st")
> scala> spark.catalog.listTables
> scala> spark.catalog.listTables
>  18/02/21 15:47:44 ERROR log: error in initSerDe: 
> java.lang.ClassNotFoundException Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
>  at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
>  at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
>  at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
>  at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
>  at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
>  at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)






[jira] [Created] (SPARK-23473) spark.catalog.listTables error when database name starts with a number

2018-02-20 Thread Goun Na (JIRA)
Goun Na created SPARK-23473:
---

 Summary: spark.catalog.listTables error when database name starts 
with a number
 Key: SPARK-23473
 URL: https://issues.apache.org/jira/browse/SPARK-23473
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 2.1.0
Reporter: Goun Na


scala> spark.catalog.setCurrentDatabase("11st")

scala> spark.catalog.listTables

scala> spark.catalog.listTables
18/02/21 15:47:44 ERROR log: error in initSerDe: 
java.lang.ClassNotFoundException Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
 at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
 at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385)
 at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
 at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$10.apply(HiveClientImpl.scala:365)
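
For reference, a minimal reproduction sketch of the reported sequence. The CREATE DATABASE step is an illustrative assumption (the report assumes a Hive database literally named 11st already exists), and the ClassNotFoundException above additionally requires a table in that database whose SerDe class, hive-contrib's RegexSerDe, is missing from the classpath.

{code:scala}
// Identifiers that start with a digit must be backtick-quoted in SQL, while the
// catalog API takes the raw name; both paths refer to the same database.
spark.sql("CREATE DATABASE IF NOT EXISTS `11st`")

spark.catalog.setCurrentDatabase("11st")
// listTables resolves each table's metadata, including its SerDe, which is
// where the ClassNotFoundException in the report surfaces.
spark.catalog.listTables.show()
{code}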






[jira] [Commented] (SPARK-23303) improve the explain result for data source v2 relations

2018-02-20 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370992#comment-16370992
 ] 

Apache Spark commented on SPARK-23303:
--

User 'cloud-fan' has created a pull request for this issue:
https://github.com/apache/spark/pull/20647

> improve the explain result for data source v2 relations
> ---
>
> Key: SPARK-23303
> URL: https://issues.apache.org/jira/browse/SPARK-23303
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Wenchen Fan
>Assignee: Wenchen Fan
>Priority: Major
> Fix For: 2.4.0
>
>







[jira] [Commented] (SPARK-18057) Update structured streaming kafka from 10.0.1 to 10.2.0

2018-02-20 Thread Cody Koeninger (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370924#comment-16370924
 ] 

Cody Koeninger commented on SPARK-18057:


Just doing the upgrade is probably a good starting point for any
potential new contributor who's interested in Kafka.

Agreed that time indexes would be a great thing to take advantage of
after that, happy to help out with either.



> Update structured streaming kafka from 10.0.1 to 10.2.0
> ---
>
> Key: SPARK-18057
> URL: https://issues.apache.org/jira/browse/SPARK-18057
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Reporter: Cody Koeninger
>Priority: Major
>
> There are a couple of relevant KIPs here, 
> https://archive.apache.org/dist/kafka/0.10.1.0/RELEASE_NOTES.html






[jira] [Commented] (SPARK-23463) Filter operation fails to handle blank values and evicts rows that even satisfy the filtering condition

2018-02-20 Thread Manan Bakshi (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370917#comment-16370917
 ] 

Manan Bakshi commented on SPARK-23463:
--

Hi Marco,

That makes sense. However, this same code used to work fine for Spark 2.1.1 
regardless of whether you compare against 0 or 0.0. Can you help me understand 
what changed?

> Filter operation fails to handle blank values and evicts rows that even 
> satisfy the filtering condition
> ---
>
> Key: SPARK-23463
> URL: https://issues.apache.org/jira/browse/SPARK-23463
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 2.2.1
>Reporter: Manan Bakshi
>Priority: Critical
> Attachments: sample
>
>
> Filter operations were updated in Spark 2.2.0, when the Cost Based Optimizer was 
> introduced to look at table statistics and decide filter selectivity. Since 
> then, filter has behaved unexpectedly for blank values: the operation not only 
> drops the rows with blank values but also filters out rows that actually meet 
> the filter criteria.
> Steps to repro
> Consider a simple dataframe with some blank values as below:
> ||dev||val||
> |ALL|0.01|
> |ALL|0.02|
> |ALL|0.004|
> |ALL| |
> |ALL|2.5|
> |ALL|4.5|
> |ALL|45|
> Running a simple filter operation over the val column of this dataframe yields 
> unexpected results. For example, the following query returned an empty dataframe:
> df.filter(df["val"] > 0)
> ||dev||val||
> However, the filter operation works as expected if the 0 in the filter 
> condition is replaced by the float 0.0:
> df.filter(df["val"] > 0.0)
> ||dev||val||
> |ALL|0.01|
> |ALL|0.02|
> |ALL|0.004|
> |ALL|2.5|
> |ALL|4.5|
> |ALL|45|
>  
> Note that this bug only exists in Spark 2.2.0 and later. The previous 
> versions filter as expected for both int (0) and float (0.0) values in the 
> filter condition.
> Also, if there are no blank values, the filter operation works as expected 
> for all versions.
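
A Scala sketch of the comparison described above (the original report is PySpark). It assumes the val column arrives as strings, since the report does not show the schema; the blank cell and the implicit cast in the comparison are then what differ between the two predicates.

{code:scala}
import spark.implicits._

// Same shape as the table in the report, with the blank value kept as an empty string.
val df = Seq(
  ("ALL", "0.01"), ("ALL", "0.02"), ("ALL", "0.004"),
  ("ALL", ""),
  ("ALL", "2.5"), ("ALL", "4.5"), ("ALL", "45")
).toDF("dev", "val")

// Integer literal: reported to return an empty result on 2.2.x.
df.filter($"val" > 0).show()

// Double literal: reported to return every non-blank row, as expected.
df.filter($"val" > 0.0).show()
{code}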






[jira] [Resolved] (SPARK-23424) Add codegenStageId in comment

2018-02-20 Thread Wenchen Fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenchen Fan resolved SPARK-23424.
-
   Resolution: Fixed
Fix Version/s: 2.4.0

Issue resolved by pull request 20612
[https://github.com/apache/spark/pull/20612]

> Add codegenStageId in comment
> -
>
> Key: SPARK-23424
> URL: https://issues.apache.org/jira/browse/SPARK-23424
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: Kazuaki Ishizaki
>Assignee: Kazuaki Ishizaki
>Priority: Minor
> Fix For: 2.4.0
>
>
> SPARK-23032 introduced a per-query ID in the generated class name in 
> {{WholeStageCodegenExec}}. This JIRA also adds that ID to the comment of the 
> generated Java source file.
> This is helpful for debugging when {{spark.sql.codegen.useIdInClassName}} is 
> false.
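
A quick way to see where that ID ends up is to dump the generated code. This sketch uses the debug helpers on an illustrative trivial query; the helpers print the Java source of each WholeStageCodegen subtree.

{code:scala}
import org.apache.spark.sql.execution.debug._

val df = spark.range(10).selectExpr("id * 2 AS doubled")

// Prints the generated Java source; with this change the codegen stage id also
// appears in a comment, which helps when spark.sql.codegen.useIdInClassName=false
// keeps it out of the class name.
df.debugCodegen()
{code}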






[jira] [Assigned] (SPARK-23424) Add codegenStageId in comment

2018-02-20 Thread Wenchen Fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenchen Fan reassigned SPARK-23424:
---

Assignee: Kazuaki Ishizaki

> Add codegenStageId in comment
> -
>
> Key: SPARK-23424
> URL: https://issues.apache.org/jira/browse/SPARK-23424
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: Kazuaki Ishizaki
>Assignee: Kazuaki Ishizaki
>Priority: Minor
> Fix For: 2.4.0
>
>
> SPARK-23032 introduced a per-query ID in the generated class name in 
> {{WholeStageCodegenExec}}. This JIRA also adds that ID to the comment of the 
> generated Java source file.
> This is helpful for debugging when {{spark.sql.codegen.useIdInClassName}} is 
> false.






[jira] [Updated] (SPARK-23053) taskBinarySerialization and task partitions calculate in DagScheduler.submitMissingTasks should keep the same RDD checkpoint status

2018-02-20 Thread Imran Rashid (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Imran Rashid updated SPARK-23053:
-
Fix Version/s: 2.1.3

> taskBinarySerialization and task partitions calculate in 
> DagScheduler.submitMissingTasks should keep the same RDD checkpoint status
> ---
>
> Key: SPARK-23053
> URL: https://issues.apache.org/jira/browse/SPARK-23053
> Project: Spark
>  Issue Type: Bug
>  Components: Scheduler, Spark Core
>Affects Versions: 2.1.0
>Reporter: huangtengfei
>Assignee: huangtengfei
>Priority: Major
> Fix For: 2.1.3, 2.2.2, 2.3.1, 2.4.0
>
>
> This happens when we run concurrent jobs using the same RDD that is marked for 
> checkpointing. If one job has finished and starts the RDD.doCheckpoint process 
> while another job is being submitted, submitStage and submitMissingTasks are 
> called. 
> [submitMissingTasks|https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala#L961]
>  serializes taskBinaryBytes and calculates the task partitions, both of which 
> depend on the checkpoint status. If the former is computed before doCheckpoint 
> finishes while the latter is computed after it finishes, then when a task runs, 
> rdd.compute is called, and RDDs with a particular partition type, such as 
> [MapWithStateRDD|https://github.com/apache/spark/blob/master/streaming/src/main/scala/org/apache/spark/streaming/rdd/MapWithStateRDD.scala],
>  which cast the partition type, fail with a ClassCastException because the 
> partition parameter is actually a CheckpointRDDPartition.
> This error occurs because rdd.doCheckpoint runs in the same thread that called 
> sc.runJob, while the task serialization happens in the DAGScheduler's event loop.
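
A hedged sketch of a user-level setup that can hit this race; the checkpoint path, data size, and use of count actions are illustrative. Two actions are submitted concurrently on one RDD that is marked for checkpointing, so one job's doCheckpoint() can overlap with the other job's task serialization.

{code:scala}
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

sc.setCheckpointDir("hdfs:///tmp/checkpoints")   // illustrative path

val rdd = sc.parallelize(1 to 1000000, numSlices = 100).map(_ * 2)
rdd.checkpoint()   // marked for checkpointing, materialized by the first job to finish

// Two concurrent jobs over the same RDD: the first to finish runs
// rdd.doCheckpoint() in its calling thread while the DAGScheduler event loop
// may still be serializing the task binary for the other job.
val jobs = Seq(Future(rdd.count()), Future(rdd.count()))
jobs.foreach(Await.result(_, 10.minutes))
{code}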






[jira] [Resolved] (SPARK-23454) Add Trigger information to the Structured Streaming programming guide

2018-02-20 Thread Tathagata Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tathagata Das resolved SPARK-23454.
---
   Resolution: Fixed
Fix Version/s: 2.3.0
   3.0.0

Issue resolved by pull request 20631
[https://github.com/apache/spark/pull/20631]

> Add Trigger information to the Structured Streaming programming guide
> -
>
> Key: SPARK-23454
> URL: https://issues.apache.org/jira/browse/SPARK-23454
> Project: Spark
>  Issue Type: Improvement
>  Components: Documentation, Structured Streaming
>Affects Versions: 2.3.0
>Reporter: Tathagata Das
>Assignee: Tathagata Das
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
>







[jira] [Resolved] (SPARK-23468) Failure to authenticate with old shuffle service

2018-02-20 Thread Marcelo Vanzin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcelo Vanzin resolved SPARK-23468.

   Resolution: Fixed
Fix Version/s: 2.3.0

Issue resolved by pull request 20643
[https://github.com/apache/spark/pull/20643]

> Failure to authenticate with old shuffle service
> 
>
> Key: SPARK-23468
> URL: https://issues.apache.org/jira/browse/SPARK-23468
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: Marcelo Vanzin
>Assignee: Marcelo Vanzin
>Priority: Minor
> Fix For: 2.3.0
>
>
> I ran into this while testing a fix for SPARK-23361. Things seem to work fine 
> with a 2.x shuffle service, but with a 1.6 shuffle service I get this error 
> every once in a while:
> {noformat}
> org.apache.spark.SparkException: Unable to register with external shuffle 
> server due to : java.lang.RuntimeException: 
> javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
> violation. Mismatched response.
> at 
> org.spark-project.guava.base.Throwables.propagate(Throwables.java:160)
> at 
> org.apache.spark.network.sasl.SparkSaslServer.response(SparkSaslServer.java:121)
> at 
> org.apache.spark.network.sasl.SaslRpcHandler.receive(SaslRpcHandler.java:101)
> at 
> org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:154)
> at 
> org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:102)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
> {noformat}
> This is a regression in 2.3 and I have a fix for it as part of the fix for 
> SPARK-23361. I'm filing this separately so that this particular fix is 
> backported to 2.3.






[jira] [Assigned] (SPARK-23468) Failure to authenticate with old shuffle service

2018-02-20 Thread Marcelo Vanzin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcelo Vanzin reassigned SPARK-23468:
--

Assignee: Marcelo Vanzin

> Failure to authenticate with old shuffle service
> 
>
> Key: SPARK-23468
> URL: https://issues.apache.org/jira/browse/SPARK-23468
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: Marcelo Vanzin
>Assignee: Marcelo Vanzin
>Priority: Minor
> Fix For: 2.3.0
>
>
> I ran into this while testing a fix for SPARK-23361. Things seem to work fine 
> with a 2.x shuffle service, but with a 1.6 shuffle service I get this error 
> every once in a while:
> {noformat}
> org.apache.spark.SparkException: Unable to register with external shuffle 
> server due to : java.lang.RuntimeException: 
> javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
> violation. Mismatched response.
> at 
> org.spark-project.guava.base.Throwables.propagate(Throwables.java:160)
> at 
> org.apache.spark.network.sasl.SparkSaslServer.response(SparkSaslServer.java:121)
> at 
> org.apache.spark.network.sasl.SaslRpcHandler.receive(SaslRpcHandler.java:101)
> at 
> org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:154)
> at 
> org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:102)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
> {noformat}
> This is a regression in 2.3 and I have a fix for it as part of the fix for 
> SPARK-23361. I'm filing this separately so that this particular fix is 
> backported to 2.3.






[jira] [Resolved] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Sameer Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sameer Agarwal resolved SPARK-23470.

   Resolution: Fixed
 Assignee: Marcelo Vanzin
Fix Version/s: 2.3.0

Issue resolved by https://github.com/apache/spark/pull/20644

> org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow
> --
>
> Key: SPARK-23470
> URL: https://issues.apache.org/jira/browse/SPARK-23470
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Assignee: Marcelo Vanzin
>Priority: Blocker
> Fix For: 2.3.0
>
>
> I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" when 
> accessing All Jobs page. The stack dump says it was running 
> "org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".
> {code}
> "SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
> runnable [0x7fc0ce9f8000]
>java.lang.Thread.State: RUNNABLE
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
>  Source)
>   at java.util.TimSort.binarySort(TimSort.java:296)
>   at java.util.TimSort.sort(TimSort.java:239)
>   at java.util.Arrays.sort(Arrays.java:1512)
>   at java.util.ArrayList.sort(ArrayList.java:1460)
>   at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
>   at 
> java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
>   at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
>   at 
> org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
>   at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
>   at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
>   at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(Conte

[jira] [Commented] (SPARK-23369) HiveClientSuites fails with unresolved dependency

2018-02-20 Thread Jose Torres (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370832#comment-16370832
 ] 

Jose Torres commented on SPARK-23369:
-

Still seeing this in e.g. https://github.com/apache/spark/pull/20646

> HiveClientSuites fails with unresolved dependency
> -
>
> Key: SPARK-23369
> URL: https://issues.apache.org/jira/browse/SPARK-23369
> Project: Spark
>  Issue Type: Test
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Wenchen Fan
>Priority: Major
>
> I saw it multiple times in PR builders. The error message is:
>  
> {code:java}
> sbt.ForkMain$ForkError: java.lang.RuntimeException: [unresolved dependency: 
> com.sun.jersey#jersey-json;1.14: configuration not found in 
> com.sun.jersey#jersey-json;1.14: 'master(compile)'. Missing configuration: 
> 'compile'. It was required from org.apache.hadoop#hadoop-yarn-common;2.6.5 
> compile] at 
> org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1270)
>  at 
> org.apache.spark.sql.hive.client.IsolatedClientLoader$$anonfun$2.apply(IsolatedClientLoader.scala:113)
>  at 
> org.apache.spark.sql.hive.client.IsolatedClientLoader$$anonfun$2.apply(IsolatedClientLoader.scala:113)
>  at org.apache.spark.sql.catalyst.util.package$.quietly(package.scala:42) at 
> org.apache.spark.sql.hive.client.IsolatedClientLoader$.downloadVersion(IsolatedClientLoader.scala:112)
>  at 
> org.apache.spark.sql.hive.client.IsolatedClientLoader$.liftedTree1$1(IsolatedClientLoader.scala:74)
>  at 
> org.apache.spark.sql.hive.client.IsolatedClientLoader$.forVersion(IsolatedClientLoader.scala:62)
>  at 
> org.apache.spark.sql.hive.client.HiveClientBuilder$.buildClient(HiveClientBuilder.scala:51)
>  at 
> org.apache.spark.sql.hive.client.HiveVersionSuite.buildClient(HiveVersionSuite.scala:41)
>  at 
> org.apache.spark.sql.hive.client.HiveClientSuite.org$apache$spark$sql$hive$client$HiveClientSuite$$init(HiveClientSuite.scala:48)
>  at 
> org.apache.spark.sql.hive.client.HiveClientSuite.beforeAll(HiveClientSuite.scala:71)
>  at 
> org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:212)
>  at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210) at 
> org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:52) at 
> org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1210) at 
> org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1257) at 
> org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1255) at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>  at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186) at 
> org.scalatest.Suite$class.runNestedSuites(Suite.scala:1255) at 
> org.apache.spark.sql.hive.client.HiveClientSuites.runNestedSuites(HiveClientSuites.scala:24)
>  at org.scalatest.Suite$class.run(Suite.scala:1144) at 
> org.apache.spark.sql.hive.client.HiveClientSuites.run(HiveClientSuites.scala:24)
>  at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
>  at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480) 
> at sbt.ForkMain$Run$2.call(ForkMain.java:296) at 
> sbt.ForkMain$Run$2.call(ForkMain.java:286) at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745){code}






[jira] [Commented] (SPARK-18057) Update structured streaming kafka from 10.0.1 to 10.2.0

2018-02-20 Thread Michael Armbrust (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370780#comment-16370780
 ] 

Michael Armbrust commented on SPARK-18057:
--

+1 to upgrading and it would also be great to add support for any new features 
(i.e. starting a query based on the time index rather than a specific offset).

I personally don't think that fixing KAFKA-4897 is mandatory, but keeping our 
stress tests running without hanging or losing coverage is.

Regarding naming, I'd probably just stop changing the name and say that 
"kafka-0-10-sql" works with any broker that is 0.10.0+.  We could also get rid 
of it, but that seems like an unnecessary change to me that just causes 
unnecessary pain to existing users.

> Update structured streaming kafka from 10.0.1 to 10.2.0
> ---
>
> Key: SPARK-18057
> URL: https://issues.apache.org/jira/browse/SPARK-18057
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Reporter: Cody Koeninger
>Priority: Major
>
> There are a couple of relevant KIPs here, 
> https://archive.apache.org/dist/kafka/0.10.1.0/RELEASE_NOTES.html






[jira] [Assigned] (SPARK-23434) Spark should not warn `metadata directory` for a HDFS file path

2018-02-20 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu reassigned SPARK-23434:


Assignee: Dongjoon Hyun

> Spark should not warn `metadata directory` for a HDFS file path
> ---
>
> Key: SPARK-23434
> URL: https://issues.apache.org/jira/browse/SPARK-23434
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.2, 2.1.2, 2.2.1, 2.3.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
> Fix For: 2.4.0
>
>
> In a kerberized cluster, when Spark reads a file path (e.g. `people.json`), 
> it logs a misleading warning while looking up 
> `people.json/_spark_metadata`. The root cause of this situation is the 
> difference between `LocalFileSystem` and `DistributedFileSystem`: 
> `LocalFileSystem.exists()` returns `false`, but 
> `DistributedFileSystem.exists` raises an exception.
> {code}
> scala> spark.version
> res0: String = 2.4.0-SNAPSHOT
> scala> 
> spark.read.json("file:///usr/hdp/current/spark-client/examples/src/main/resources/people.json").show
> ++---+
> | age|   name|
> ++---+
> |null|Michael|
> |  30|   Andy|
> |  19| Justin|
> ++---+
> scala> spark.read.json("hdfs:///tmp/people.json")
> 18/02/15 05:00:48 WARN streaming.FileStreamSink: Error while looking for 
> metadata directory.
> 18/02/15 05:00:48 WARN streaming.FileStreamSink: Error while looking for 
> metadata directory.
> res6: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
> {code}
> {code}
> scala> spark.version
> res0: String = 2.2.1
> scala> spark.read.json("hdfs:///tmp/people.json").show
> 18/02/15 05:28:02 WARN FileStreamSink: Error while looking for metadata 
> directory.
> 18/02/15 05:28:02 WARN FileStreamSink: Error while looking for metadata 
> directory.
> {code}
> {code}
> scala> spark.version
> res0: String = 2.1.2
> scala> spark.read.json("hdfs:///tmp/people.json").show
> 18/02/15 05:29:53 WARN DataSource: Error while looking for metadata directory.
> ++---+
> | age|   name|
> ++---+
> |null|Michael|
> |  30|   Andy|
> |  19| Justin|
> ++---+
> {code}
> {code}
> scala> spark.version
> res0: String = 2.0.2
> scala> spark.read.json("hdfs:///tmp/people.json").show
> 18/02/15 05:25:24 WARN DataSource: Error while looking for metadata directory.
> {code}
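
A small sketch of the filesystem difference described above, using the plain Hadoop API with an illustrative path; it is not Spark's code, just the probe pattern a defensive caller could use, treating a failed check for _spark_metadata as "no metadata" instead of logging a warning.

{code:scala}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val metadataPath = new Path("hdfs:///tmp/people.json/_spark_metadata")
val fs: FileSystem = metadataPath.getFileSystem(new Configuration())

// On LocalFileSystem this simply returns false; on a kerberized
// DistributedFileSystem the same call can throw instead.
val hasMetadata =
  try fs.exists(metadataPath)
  catch {
    case _: Exception => false   // "cannot check" is treated as "no metadata"
  }
{code}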






[jira] [Resolved] (SPARK-23434) Spark should not warn `metadata directory` for a HDFS file path

2018-02-20 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu resolved SPARK-23434.
--
   Resolution: Fixed
Fix Version/s: 2.4.0

Issue resolved by pull request 20616
[https://github.com/apache/spark/pull/20616]

> Spark should not warn `metadata directory` for a HDFS file path
> ---
>
> Key: SPARK-23434
> URL: https://issues.apache.org/jira/browse/SPARK-23434
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.2, 2.1.2, 2.2.1, 2.3.0
>Reporter: Dongjoon Hyun
>Priority: Major
> Fix For: 2.4.0
>
>
> In a kerberized cluster, when Spark reads a file path (e.g. `people.json`), 
> it logs a misleading warning while looking up 
> `people.json/_spark_metadata`. The root cause of this situation is the 
> difference between `LocalFileSystem` and `DistributedFileSystem`: 
> `LocalFileSystem.exists()` returns `false`, but 
> `DistributedFileSystem.exists` raises an exception.
> {code}
> scala> spark.version
> res0: String = 2.4.0-SNAPSHOT
> scala> 
> spark.read.json("file:///usr/hdp/current/spark-client/examples/src/main/resources/people.json").show
> ++---+
> | age|   name|
> ++---+
> |null|Michael|
> |  30|   Andy|
> |  19| Justin|
> ++---+
> scala> spark.read.json("hdfs:///tmp/people.json")
> 18/02/15 05:00:48 WARN streaming.FileStreamSink: Error while looking for 
> metadata directory.
> 18/02/15 05:00:48 WARN streaming.FileStreamSink: Error while looking for 
> metadata directory.
> res6: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
> {code}
> {code}
> scala> spark.version
> res0: String = 2.2.1
> scala> spark.read.json("hdfs:///tmp/people.json").show
> 18/02/15 05:28:02 WARN FileStreamSink: Error while looking for metadata 
> directory.
> 18/02/15 05:28:02 WARN FileStreamSink: Error while looking for metadata 
> directory.
> {code}
> {code}
> scala> spark.version
> res0: String = 2.1.2
> scala> spark.read.json("hdfs:///tmp/people.json").show
> 18/02/15 05:29:53 WARN DataSource: Error while looking for metadata directory.
> ++---+
> | age|   name|
> ++---+
> |null|Michael|
> |  30|   Andy|
> |  19| Justin|
> ++---+
> {code}
> {code}
> scala> spark.version
> res0: String = 2.0.2
> scala> spark.read.json("hdfs:///tmp/people.json").show
> 18/02/15 05:25:24 WARN DataSource: Error while looking for metadata directory.
> {code}






[jira] [Assigned] (SPARK-23408) Flaky test: StreamingOuterJoinSuite.left outer early state exclusion on right

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23408:


Assignee: (was: Apache Spark)

> Flaky test: StreamingOuterJoinSuite.left outer early state exclusion on right
> -
>
> Key: SPARK-23408
> URL: https://issues.apache.org/jira/browse/SPARK-23408
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Tests
>Affects Versions: 2.4.0
>Reporter: Marcelo Vanzin
>Priority: Minor
>
> Seen on an unrelated PR.
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/87386/testReport/org.apache.spark.sql.streaming/StreamingOuterJoinSuite/left_outer_early_state_exclusion_on_right/
> {noformat}
> sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: 
> Assert on query failed: Check total state rows = List(4), updated state rows 
> = List(4): Array(1) did not equal List(4) incorrect updates rows
> org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:528)
>   org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   
> org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
>   
> org.apache.spark.sql.streaming.StateStoreMetricsTest$$anonfun$assertNumStateRows$1.apply(StateStoreMetricsTest.scala:28)
>   
> org.apache.spark.sql.streaming.StateStoreMetricsTest$$anonfun$assertNumStateRows$1.apply(StateStoreMetricsTest.scala:23)
>   
> org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1$$anonfun$apply$14.apply$mcZ$sp(StreamTest.scala:568)
>   
> org.apache.spark.sql.streaming.StreamTest$class.verify$1(StreamTest.scala:371)
>   
> org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1.apply(StreamTest.scala:568)
>   
> org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1.apply(StreamTest.scala:432)
>   
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> == Progress ==
>AddData to MemoryStream[value#19652]: 3,4,5
>AddData to MemoryStream[value#19662]: 1,2,3
>CheckLastBatch: [3,10,6,9]
> => AssertOnQuery(, Check total state rows = List(4), updated state 
> rows = List(4))
>AddData to MemoryStream[value#19652]: 20
>AddData to MemoryStream[value#19662]: 21
>CheckLastBatch: 
>AddData to MemoryStream[value#19662]: 20
>CheckLastBatch: [20,30,40,60],[4,10,8,null],[5,10,10,null]
> == Stream ==
> Output Mode: Append
> Stream state: {MemoryStream[value#19652]: 0,MemoryStream[value#19662]: 0}
> Thread state: alive
> Thread stack trace: java.lang.Thread.sleep(Native Method)
> org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:152)
> org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
> org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:120)
> org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
> org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
> {noformat}
> No other failures in the history, though.






[jira] [Assigned] (SPARK-23408) Flaky test: StreamingOuterJoinSuite.left outer early state exclusion on right

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23408:


Assignee: Apache Spark

> Flaky test: StreamingOuterJoinSuite.left outer early state exclusion on right
> -
>
> Key: SPARK-23408
> URL: https://issues.apache.org/jira/browse/SPARK-23408
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Tests
>Affects Versions: 2.4.0
>Reporter: Marcelo Vanzin
>Assignee: Apache Spark
>Priority: Minor
>
> Seen on an unrelated PR.
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/87386/testReport/org.apache.spark.sql.streaming/StreamingOuterJoinSuite/left_outer_early_state_exclusion_on_right/
> {noformat}
> sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: 
> Assert on query failed: Check total state rows = List(4), updated state rows 
> = List(4): Array(1) did not equal List(4) incorrect updates rows
> org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:528)
>   org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   
> org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
>   
> org.apache.spark.sql.streaming.StateStoreMetricsTest$$anonfun$assertNumStateRows$1.apply(StateStoreMetricsTest.scala:28)
>   
> org.apache.spark.sql.streaming.StateStoreMetricsTest$$anonfun$assertNumStateRows$1.apply(StateStoreMetricsTest.scala:23)
>   
> org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1$$anonfun$apply$14.apply$mcZ$sp(StreamTest.scala:568)
>   
> org.apache.spark.sql.streaming.StreamTest$class.verify$1(StreamTest.scala:371)
>   
> org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1.apply(StreamTest.scala:568)
>   
> org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1.apply(StreamTest.scala:432)
>   
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> == Progress ==
>AddData to MemoryStream[value#19652]: 3,4,5
>AddData to MemoryStream[value#19662]: 1,2,3
>CheckLastBatch: [3,10,6,9]
> => AssertOnQuery(, Check total state rows = List(4), updated state 
> rows = List(4))
>AddData to MemoryStream[value#19652]: 20
>AddData to MemoryStream[value#19662]: 21
>CheckLastBatch: 
>AddData to MemoryStream[value#19662]: 20
>CheckLastBatch: [20,30,40,60],[4,10,8,null],[5,10,10,null]
> == Stream ==
> Output Mode: Append
> Stream state: {MemoryStream[value#19652]: 0,MemoryStream[value#19662]: 0}
> Thread state: alive
> Thread stack trace: java.lang.Thread.sleep(Native Method)
> org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:152)
> org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
> org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:120)
> org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
> org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
> {noformat}
> No other failures in the history, though.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-23408) Flaky test: StreamingOuterJoinSuite.left outer early state exclusion on right

2018-02-20 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370761#comment-16370761
 ] 

Apache Spark commented on SPARK-23408:
--

User 'jose-torres' has created a pull request for this issue:
https://github.com/apache/spark/pull/20646

> Flaky test: StreamingOuterJoinSuite.left outer early state exclusion on right
> -
>
> Key: SPARK-23408
> URL: https://issues.apache.org/jira/browse/SPARK-23408
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Tests
>Affects Versions: 2.4.0
>Reporter: Marcelo Vanzin
>Priority: Minor
>
> Seen on an unrelated PR.
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/87386/testReport/org.apache.spark.sql.streaming/StreamingOuterJoinSuite/left_outer_early_state_exclusion_on_right/
> {noformat}
> sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: 
> Assert on query failed: Check total state rows = List(4), updated state rows 
> = List(4): Array(1) did not equal List(4) incorrect updates rows
> org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:528)
>   org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   
> org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
>   
> org.apache.spark.sql.streaming.StateStoreMetricsTest$$anonfun$assertNumStateRows$1.apply(StateStoreMetricsTest.scala:28)
>   
> org.apache.spark.sql.streaming.StateStoreMetricsTest$$anonfun$assertNumStateRows$1.apply(StateStoreMetricsTest.scala:23)
>   
> org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1$$anonfun$apply$14.apply$mcZ$sp(StreamTest.scala:568)
>   
> org.apache.spark.sql.streaming.StreamTest$class.verify$1(StreamTest.scala:371)
>   
> org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1.apply(StreamTest.scala:568)
>   
> org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1.apply(StreamTest.scala:432)
>   
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> == Progress ==
>AddData to MemoryStream[value#19652]: 3,4,5
>AddData to MemoryStream[value#19662]: 1,2,3
>CheckLastBatch: [3,10,6,9]
> => AssertOnQuery(, Check total state rows = List(4), updated state 
> rows = List(4))
>AddData to MemoryStream[value#19652]: 20
>AddData to MemoryStream[value#19662]: 21
>CheckLastBatch: 
>AddData to MemoryStream[value#19662]: 20
>CheckLastBatch: [20,30,40,60],[4,10,8,null],[5,10,10,null]
> == Stream ==
> Output Mode: Append
> Stream state: {MemoryStream[value#19652]: 0,MemoryStream[value#19662]: 0}
> Thread state: alive
> Thread stack trace: java.lang.Thread.sleep(Native Method)
> org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:152)
> org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
> org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:120)
> org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
> org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
> {noformat}
> No other failures in the history, though.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-23434) Spark should not warn `metadata directory` for a HDFS file path

2018-02-20 Thread Dongjoon Hyun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-23434:
--
Description: 
In a kerberized cluster, when Spark reads a file path (e.g. `people.json`), it 
logs a misleading warning while looking up `people.json/_spark_metadata`. The 
root cause of this situation is a behavioral difference between 
`LocalFileSystem` and `DistributedFileSystem`: `LocalFileSystem.exists()` 
returns `false`, whereas `DistributedFileSystem.exists()` raises an exception.
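
For illustration only, a hedged sketch (not the actual FileStreamSink code; the helper name is made up) of a lookup that treats the two behaviors the same:

{code:scala}
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical helper: consider the metadata directory absent both when
// exists() returns false (LocalFileSystem) and when the lookup throws
// (DistributedFileSystem on a kerberized cluster), instead of warning.
def hasSparkMetadata(fs: FileSystem, basePath: Path): Boolean = {
  val metadataPath = new Path(basePath, "_spark_metadata")
  try {
    fs.exists(metadataPath) && fs.getFileStatus(metadataPath).isDirectory
  } catch {
    case _: Exception => false
  }
}
{code}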

{code}
scala> spark.version
res0: String = 2.4.0-SNAPSHOT

scala> 
spark.read.json("file:///usr/hdp/current/spark-client/examples/src/main/resources/people.json").show
++---+
| age|   name|
++---+
|null|Michael|
|  30|   Andy|
|  19| Justin|
++---+

scala> spark.read.json("hdfs:///tmp/people.json")
18/02/15 05:00:48 WARN streaming.FileStreamSink: Error while looking for 
metadata directory.
18/02/15 05:00:48 WARN streaming.FileStreamSink: Error while looking for 
metadata directory.
res6: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
{code}

{code}
scala> spark.version
res0: String = 2.2.1

scala> spark.read.json("hdfs:///tmp/people.json").show
18/02/15 05:28:02 WARN FileStreamSink: Error while looking for metadata 
directory.
18/02/15 05:28:02 WARN FileStreamSink: Error while looking for metadata 
directory.
{code}

{code}
scala> spark.version
res0: String = 2.1.2

scala> spark.read.json("hdfs:///tmp/people.json").show
18/02/15 05:29:53 WARN DataSource: Error while looking for metadata directory.
++---+
| age|   name|
++---+
|null|Michael|
|  30|   Andy|
|  19| Justin|
++---+
{code}

{code}
scala> spark.version
res0: String = 2.0.2

scala> spark.read.json("hdfs:///tmp/people.json").show
18/02/15 05:25:24 WARN DataSource: Error while looking for metadata directory.
{code}

  was:
When Spark reads a file path (e.g. `people.json`), it warns with a wrong error 
message while looking up `people.json/_spark_metadata`. The root cause of this 
situation is the difference between `LocalFileSystem` and 
`DistributedFileSystem`: `LocalFileSystem.exists()` returns `false`, whereas 
`DistributedFileSystem.exists()` raises an exception.

{code}
scala> spark.version
res0: String = 2.4.0-SNAPSHOT

scala> 
spark.read.json("file:///usr/hdp/current/spark-client/examples/src/main/resources/people.json").show
++---+
| age|   name|
++---+
|null|Michael|
|  30|   Andy|
|  19| Justin|
++---+

scala> spark.read.json("hdfs:///tmp/people.json")
18/02/15 05:00:48 WARN streaming.FileStreamSink: Error while looking for 
metadata directory.
18/02/15 05:00:48 WARN streaming.FileStreamSink: Error while looking for 
metadata directory.
res6: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
{code}

{code}
scala> spark.version
res0: String = 2.2.1

scala> spark.read.json("hdfs:///tmp/people.json").show
18/02/15 05:28:02 WARN FileStreamSink: Error while looking for metadata 
directory.
18/02/15 05:28:02 WARN FileStreamSink: Error while looking for metadata 
directory.
{code}

{code}
scala> spark.version
res0: String = 2.1.2

scala> spark.read.json("hdfs:///tmp/people.json").show
18/02/15 05:29:53 WARN DataSource: Error while looking for metadata directory.
++---+
| age|   name|
++---+
|null|Michael|
|  30|   Andy|
|  19| Justin|
++---+
{code}

{code}
scala> spark.version
res0: String = 2.0.2

scala> spark.read.json("hdfs:///tmp/people.json").show
18/02/15 05:25:24 WARN DataSource: Error while looking for metadata directory.
{code}


> Spark should not warn `metadata directory` for a HDFS file path
> ---
>
> Key: SPARK-23434
> URL: https://issues.apache.org/jira/browse/SPARK-23434
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.2, 2.1.2, 2.2.1, 2.3.0
>Reporter: Dongjoon Hyun
>Priority: Major
>
> In a kerberized cluster, when Spark reads a file path (e.g. `people.json`), 
> it logs a misleading warning while looking up 
> `people.json/_spark_metadata`. The root cause of this situation is a 
> behavioral difference between `LocalFileSystem` and `DistributedFileSystem`: 
> `LocalFileSystem.exists()` returns `false`, whereas 
> `DistributedFileSystem.exists()` raises an exception.
> {code}
> scala> spark.version
> res0: String = 2.4.0-SNAPSHOT
> scala> 
> spark.read.json("file:///usr/hdp/current/spark-client/examples/src/main/resources/people.json").show
> ++---+
> | age|   name|
> ++---+
> |null|Michael|
> |  30|   Andy|
> |  19| Justin|
> ++---+
> scala> spark.read.json("hdfs:///tmp/people.json")
> 18/02/15 05:00:48 WARN streaming.FileStreamSink: Error while looking for 
> metadata directory.
> 18/02/15 05:00:48 WARN streaming.FileStreamSink: Error while looking for 

[jira] [Commented] (SPARK-23472) Add config properties for administrator JVM options

2018-02-20 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370659#comment-16370659
 ] 

Apache Spark commented on SPARK-23472:
--

User 'rdblue' has created a pull request for this issue:
https://github.com/apache/spark/pull/20645

> Add config properties for administrator JVM options
> ---
>
> Key: SPARK-23472
> URL: https://issues.apache.org/jira/browse/SPARK-23472
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: Ryan Blue
>Priority: Major
>
> In our environment, users may need to add JVM options to their Spark 
> applications (e.g. to override log configuration). They typically use 
> {{--driver-java-options}} or {{spark.executor.extraJavaOptions}}. Both set 
> extraJavaOptions properties. We also have a set of administrator JVM options 
> to apply that set the garbage collector (G1GC) and kill the driver JVM on OOM.
> These two use cases both need to set extraJavaOptions properties, but will 
> clobber one another. In the past we've maintained wrapper scripts, but this 
> causes our default properties to be maintained in scripts rather than our 
> spark-defaults.properties.
> I think we should add defaultJavaOptions properties that are added along with 
> extraJavaOptions. Administrators could set defaultJavaOptions and these would 
> always get added to the JVM command line, along with any user options instead 
> of getting overwritten by user options.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23472) Add config properties for administrator JVM options

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23472:


Assignee: (was: Apache Spark)

> Add config properties for administrator JVM options
> ---
>
> Key: SPARK-23472
> URL: https://issues.apache.org/jira/browse/SPARK-23472
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: Ryan Blue
>Priority: Major
>
> In our environment, users may need to add JVM options to their Spark 
> applications (e.g. to override log configuration). They typically use 
> {{--driver-java-options}} or {{spark.executor.extraJavaOptions}}. Both set 
> extraJavaOptions properties. We also have a set of administrator JVM options 
> to apply that set the garbage collector (G1GC) and kill the driver JVM on OOM.
> These two use cases both need to set extraJavaOptions properties, but will 
> clobber one another. In the past we've maintained wrapper scripts, but this 
> causes our default properties to be maintained in scripts rather than our 
> spark-defaults.properties.
> I think we should add defaultJavaOptions properties that are added along with 
> extraJavaOptions. Administrators could set defaultJavaOptions and these would 
> always get added to the JVM command line, along with any user options instead 
> of getting overwritten by user options.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23472) Add config properties for administrator JVM options

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23472:


Assignee: Apache Spark

> Add config properties for administrator JVM options
> ---
>
> Key: SPARK-23472
> URL: https://issues.apache.org/jira/browse/SPARK-23472
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: Ryan Blue
>Assignee: Apache Spark
>Priority: Major
>
> In our environment, users may need to add JVM options to their Spark 
> applications (e.g. to override log configuration). They typically use 
> {{--driver-java-options}} or {{spark.executor.extraJavaOptions}}. Both set 
> extraJavaOptions properties. We also have a set of administrator JVM options 
> to apply that set the garbage collector (G1GC) and kill the driver JVM on OOM.
> These two use cases both need to set extraJavaOptions properties, but will 
> clobber one another. In the past we've maintained wrapper scripts, but this 
> causes our default properties to be maintained in scripts rather than our 
> spark-defaults.properties.
> I think we should add defaultJavaOptions properties that are added along with 
> extraJavaOptions. Administrators could set defaultJavaOptions and these would 
> always get added to the JVM command line, along with any user options instead 
> of getting overwritten by user options.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-23472) Add config properties for administrator JVM options

2018-02-20 Thread Ryan Blue (JIRA)
Ryan Blue created SPARK-23472:
-

 Summary: Add config properties for administrator JVM options
 Key: SPARK-23472
 URL: https://issues.apache.org/jira/browse/SPARK-23472
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 2.3.0
Reporter: Ryan Blue


In our environment, users may need to add JVM options to their Spark 
applications (e.g. to override log configuration). They typically use 
{{--driver-java-options}} or {{spark.executor.extraJavaOptions}}. Both set 
extraJavaOptions properties. We also have a set of administrator JVM options to 
apply that set the garbage collector (G1GC) and kill the driver JVM on OOM.

These two use cases both need to set extraJavaOptions properties, but the two 
sets of options will clobber one another. In the past we've maintained wrapper 
scripts, but this causes our default properties to be maintained in scripts 
rather than in our spark-defaults.properties.

I think we should add defaultJavaOptions properties that are applied along with 
extraJavaOptions. Administrators could set defaultJavaOptions, and these would 
always be added to the JVM command line alongside any user options, instead of 
being overwritten by them.
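
For illustration, a hedged sketch of how this could look in spark-defaults.conf. The defaultJavaOptions property names are only the proposal in this ticket (they do not exist in 2.3.0), and the option values are made up:

{code}
# Administrator defaults (proposed properties, names assumed from this ticket)
spark.driver.defaultJavaOptions    -XX:+UseG1GC -XX:OnOutOfMemoryError='kill -9 %p'
spark.executor.defaultJavaOptions  -XX:+UseG1GC -XX:OnOutOfMemoryError='kill -9 %p'

# User-supplied options (existing properties) would be appended to the
# defaults above instead of replacing them
spark.driver.extraJavaOptions      -Dlog4j.configuration=my-log4j.properties
spark.executor.extraJavaOptions    -Dlog4j.configuration=my-log4j.properties
{code}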



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-18057) Update structured streaming kafka from 10.0.1 to 10.2.0

2018-02-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SPARK-18057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370642#comment-16370642
 ] 

Sönke Liebau commented on SPARK-18057:
--

I think that, in addition to the naming convention issue, there was also the 
question of whether or not to wait for 
[KAFKA-4879|https://issues.apache.org/jira/browse/KAFKA-4879], which as I 
understand it causes a stress test in Spark to hang indefinitely. However, I do 
not think that ticket will get fixed anytime soon; it is currently assigned to 
the Kafka 2.0 release, which I believe is scheduled to land around October, but 
that will of course change if no one works on it.
Since repeatedly deleting and recreating topics is not really a common use case, 
I'd be in favor of moving forward with updating the version regardless.

On the naming convention, as [~guozhang] mentioned, with 
[KAFKA-4462|https://issues.apache.org/jira/browse/KAFKA-4462] now merged, the 
strict dependence on binary protocol versions that Kafka used to impose has 
been lifted to a large degree, so I think an argument could be made that we 
don't need the Kafka version in the artifact name any more.
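
For context, a hedged illustration of the naming convention being discussed — today the Kafka minor version is baked into the artifact name (sbt coordinates; the Spark version is just an example):

{code:scala}
// build.sbt: "0-10" in the artifact name refers to the Kafka 0.10 client line,
// which is the version-in-the-name convention under discussion.
libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.2.1"
{code}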

> Update structured streaming kafka from 10.0.1 to 10.2.0
> ---
>
> Key: SPARK-18057
> URL: https://issues.apache.org/jira/browse/SPARK-18057
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Reporter: Cody Koeninger
>Priority: Major
>
> There are a couple of relevant KIPs here, 
> https://archive.apache.org/dist/kafka/0.10.1.0/RELEASE_NOTES.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-15060) Fix stack overflow when executing long lineage transform without checkpoint

2018-02-20 Thread Anil Villait (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370637#comment-16370637
 ] 

Anil Villait commented on SPARK-15060:
--

Is there a workaround to this problem, such as increasing the stack or heap size?
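
Not something from this ticket, but a hedged sketch of common mitigations — raising the thread stack size and/or truncating the lineage periodically (the interval and stack size below are arbitrary):

{code:scala}
// 1) Raise the stack size, e.g. on spark-submit:
//      --conf spark.driver.extraJavaOptions=-Xss16m
//      --conf spark.executor.extraJavaOptions=-Xss16m

// 2) Cut the lineage every N iterations without writing to HDFS:
var rdd = sc.makeRDD(1 to 10, 10)
for (i <- 1 to 1000) {
  rdd = rdd.map(x => x)
  if (i % 100 == 0) {
    rdd = rdd.localCheckpoint() // marks the RDD for local checkpointing
    rdd.count()                 // an action is needed to materialize it
  }
}
rdd.reduce(_ + _)
{code}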

> Fix stack overflow when executing long lineage transform without checkpoint
> ---
>
> Key: SPARK-15060
> URL: https://issues.apache.org/jira/browse/SPARK-15060
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 1.5.2, 1.6.1, 2.0.0
>Reporter: Zheng Tan
>Priority: Major
>
> When executing a long-lineage RDD transform, it is easy to hit a stack 
> overflow exception on the driver. This can be reproduced with the following 
> example:
> var rdd = sc.makeRDD(1 to 10, 10)
> for (_ <- 1 to 1000) {
>   rdd = rdd.map(x => x)
> }
> rdd.reduce(_ + _)
> SPARK-5955 addresses this by checkpointing the RDD every 10~20 rounds. That 
> is not so convenient, since it requires checkpointing data to HDFS. 
> Another solution is cutting off the recursive RDD dependencies on the driver 
> and reassembling them on the executors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23470:


Assignee: Apache Spark

> org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow
> --
>
> Key: SPARK-23470
> URL: https://issues.apache.org/jira/browse/SPARK-23470
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Assignee: Apache Spark
>Priority: Blocker
>
> I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" when 
> accessing All Jobs page. The stack dump says it was running 
> "org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".
> {code}
> "SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
> runnable [0x7fc0ce9f8000]
>java.lang.Thread.State: RUNNABLE
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
>  Source)
>   at java.util.TimSort.binarySort(TimSort.java:296)
>   at java.util.TimSort.sort(TimSort.java:239)
>   at java.util.Arrays.sort(Arrays.java:1512)
>   at java.util.ArrayList.sort(ArrayList.java:1460)
>   at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
>   at 
> java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
>   at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
>   at 
> org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
>   at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
>   at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
>   at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.han

[jira] [Assigned] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23470:


Assignee: (was: Apache Spark)

> org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow
> --
>
> Key: SPARK-23470
> URL: https://issues.apache.org/jira/browse/SPARK-23470
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Priority: Blocker
>
> I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" when 
> accessing All Jobs page. The stack dump says it was running 
> "org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".
> {code}
> "SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
> runnable [0x7fc0ce9f8000]
>java.lang.Thread.State: RUNNABLE
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
>  Source)
>   at java.util.TimSort.binarySort(TimSort.java:296)
>   at java.util.TimSort.sort(TimSort.java:239)
>   at java.util.Arrays.sort(Arrays.java:1512)
>   at java.util.ArrayList.sort(ArrayList.java:1460)
>   at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
>   at 
> java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
>   at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
>   at 
> org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
>   at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
>   at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
>   at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doSco

[jira] [Commented] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370632#comment-16370632
 ] 

Apache Spark commented on SPARK-23470:
--

User 'vanzin' has created a pull request for this issue:
https://github.com/apache/spark/pull/20644

> org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow
> --
>
> Key: SPARK-23470
> URL: https://issues.apache.org/jira/browse/SPARK-23470
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Priority: Blocker
>
> I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" when 
> accessing All Jobs page. The stack dump says it was running 
> "org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".
> {code}
> "SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
> runnable [0x7fc0ce9f8000]
>java.lang.Thread.State: RUNNABLE
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
>  Source)
>   at java.util.TimSort.binarySort(TimSort.java:296)
>   at java.util.TimSort.sort(TimSort.java:239)
>   at java.util.Arrays.sort(Arrays.java:1512)
>   at java.util.ArrayList.sort(ArrayList.java:1460)
>   at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
>   at 
> java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
>   at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
>   at 
> org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
>   at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
>   at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
>   at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.Se

[jira] [Created] (SPARK-23471) RandomForestClassificationModel save() - incorrect metadata

2018-02-20 Thread Keepun (JIRA)
Keepun created SPARK-23471:
--

 Summary: RandomForestClassificationModel save() - incorrect 
metadata
 Key: SPARK-23471
 URL: https://issues.apache.org/jira/browse/SPARK-23471
 Project: Spark
  Issue Type: Bug
  Components: ML
Affects Versions: 2.2.1
Reporter: Keepun


RandomForestClassificationModel.load() does not work after save():

 
{code:java}
RandomForestClassifier rf = new RandomForestClassifier()
.setFeaturesCol("features")
.setLabelCol("result")
.setNumTrees(100)
.setMaxDepth(30)
.setMinInstancesPerNode(1)
//.setCacheNodeIds(true)
.setMaxMemoryInMB(500)
.setSeed(System.currentTimeMillis() + System.nanoTime());
RandomForestClassificationModel rfmodel = rf.train(data);
   try {
  rfmodel.save(args[2] + "." + System.currentTimeMillis());
   } catch (IOException e) {
  LOG.error(e.getMessage(), e);
  e.printStackTrace();
   }
{code}
File metadata\part-0:

{code:java}
{"class":"org.apache.spark.ml.classification.RandomForestClassificationModel",
"timestamp":1519136783983,"sparkVersion":"2.2.1","uid":"rfc_7c7e84ce7488",
"paramMap":{"featureSubsetStrategy":"auto","cacheNodeIds":false,"impurity":"gini",
"checkpointInterval":10,

"numTrees":20,"maxDepth":5,

"probabilityCol":"probability","labelCol":"label","featuresCol":"features",
"maxMemoryInMB":256,"minInstancesPerNode":1,"subsamplingRate":1.0,
"rawPredictionCol":"rawPrediction","predictionCol":"prediction","maxBins":32,
"minInfoGain":0.0,"seed":-491520797},"numFeatures":1354,"numClasses":2,

"numTrees":20}
{code}
should be:
{code:java}
"numTrees":100,"maxDepth":30,{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-23377) Bucketizer with multiple columns persistence bug

2018-02-20 Thread Dongjoon Hyun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-23377:
--
Fix Version/s: (was: 2.3.1)
   2.3.0

> Bucketizer with multiple columns persistence bug
> 
>
> Key: SPARK-23377
> URL: https://issues.apache.org/jira/browse/SPARK-23377
> Project: Spark
>  Issue Type: Bug
>  Components: ML
>Affects Versions: 2.3.0
>Reporter: Bago Amirbekian
>Assignee: Liang-Chi Hsieh
>Priority: Blocker
> Fix For: 2.3.0, 2.4.0
>
>
> A Bucketizer with multiple input/output columns gets "inputCol" set to the 
> default value on write -> read, which causes it to throw an error on 
> transform. Here's an example.
> {code:java}
> import org.apache.spark.ml.feature._
> val splits = Array(Double.NegativeInfinity, 0, 10, 100, 
> Double.PositiveInfinity)
> val bucketizer = new Bucketizer()
>   .setSplitsArray(Array(splits, splits))
>   .setInputCols(Array("foo1", "foo2"))
>   .setOutputCols(Array("bar1", "bar2"))
> val data = Seq((1.0, 2.0), (10.0, 100.0), (101.0, -1.0)).toDF("foo1", "foo2")
> bucketizer.transform(data)
> val path = "/temp/bucketrizer-persist-test"
> bucketizer.write.overwrite.save(path)
> val bucketizerAfterRead = Bucketizer.read.load(path)
> println(bucketizerAfterRead.isDefined(bucketizerAfterRead.outputCol))
> // This line throws an error because "outputCol" is set
> bucketizerAfterRead.transform(data)
> {code}
> And the trace:
> {code:java}
> java.lang.IllegalArgumentException: Bucketizer bucketizer_6f0acc3341f7 has 
> the inputCols Param set for multi-column transform. The following Params are 
> not applicable and should not be set: outputCol.
>   at 
> org.apache.spark.ml.param.ParamValidators$.checkExclusiveParams$1(params.scala:300)
>   at 
> org.apache.spark.ml.param.ParamValidators$.checkSingleVsMultiColumnParams(params.scala:314)
>   at 
> org.apache.spark.ml.feature.Bucketizer.transformSchema(Bucketizer.scala:189)
>   at 
> org.apache.spark.ml.feature.Bucketizer.transform(Bucketizer.scala:141)
>   at 
> line251821108a8a433da484ee31f166c83725.$read$$iw$$iw$$iw$$iw$$iw$$iw.(command-6079631:17)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-23470:
-
Description: 
I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" when 
accessing All Jobs page. The stack dump says it was running 
"org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".

{code}
"SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
runnable [0x7fc0ce9f8000]
   java.lang.Thread.State: RUNNABLE
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
at 
org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
at 
org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
at 
org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
 Source)
at java.util.TimSort.binarySort(TimSort.java:296)
at java.util.TimSort.sort(TimSort.java:239)
at java.util.Arrays.sort(Arrays.java:1512)
at java.util.ArrayList.sort(ArrayList.java:1460)
at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
at 
java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
at 
java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
at 
java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
at 
org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
at 
org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
at 
org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
at 
org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
at 
org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
at 
org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
at 
org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
at 
org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
at 
org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:381)
at 
scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:493)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
{code}

According to the heap dump, there are 954 JobDataWrapper instances and 54690 
StageDataWrapper instances. It's obvious that the UI will be slow, since the 
54690 stage entries are re-sorted for each of the 954 jobs.
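
A hedged back-of-envelope based on those numbers: if the stage list is sorted once per job row, as the stack trace suggests, a single page render performs on the order of 954 * 54690 * log2(54690) ≈ 8 * 10^8 reflection-based comparisons, which is consistent with the observed read timeouts.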


  was:
I was testing 2.3.0 RC3 and f

[jira] [Resolved] (SPARK-23316) AnalysisException after max iteration reached for IN query

2018-02-20 Thread Dongjoon Hyun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved SPARK-23316.
---
Resolution: Fixed

Hi, [~smilegator].
Could you update the assignee with [~bograd]?

> AnalysisException after max iteration reached for IN query
> --
>
> Key: SPARK-23316
> URL: https://issues.apache.org/jira/browse/SPARK-23316
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0, 2.4.0
>Reporter: Bogdan Raducanu
>Priority: Major
> Fix For: 2.3.0
>
>
> Query to reproduce:
> {code:scala}
> spark.range(10).where("(id,id) in (select id, null from range(3))").show
> {code}
> {code}
> 18/02/02 11:32:31 WARN BaseSessionStateBuilder$$anon$1: Max iterations (100) 
> reached for batch Resolution
> org.apache.spark.sql.AnalysisException: cannot resolve '(named_struct('id', 
> `id`, 'id', `id`) IN (listquery()))' due to data type mismatch:
> The data type of one or more elements in the left hand side of an IN subquery
> is not compatible with the data type of the output of the subquery
> Mismatched columns:
> []
> Left side:
> [bigint, bigint].
> Right side:
> [bigint, bigint].;;
> {code}
> The error message includes the last plan which contains ~100 useless Projects.
> Does not happen in branch-2.2.
> It has something to do with TypeCoercion; it makes a futile attempt to 
> change nullability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-23316) AnalysisException after max iteration reached for IN query

2018-02-20 Thread Dongjoon Hyun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-23316:
--
Fix Version/s: 2.3.0

> AnalysisException after max iteration reached for IN query
> --
>
> Key: SPARK-23316
> URL: https://issues.apache.org/jira/browse/SPARK-23316
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0, 2.4.0
>Reporter: Bogdan Raducanu
>Priority: Major
> Fix For: 2.3.0
>
>
> Query to reproduce:
> {code:scala}
> spark.range(10).where("(id,id) in (select id, null from range(3))").show
> {code}
> {code}
> 18/02/02 11:32:31 WARN BaseSessionStateBuilder$$anon$1: Max iterations (100) 
> reached for batch Resolution
> org.apache.spark.sql.AnalysisException: cannot resolve '(named_struct('id', 
> `id`, 'id', `id`) IN (listquery()))' due to data type mismatch:
> The data type of one or more elements in the left hand side of an IN subquery
> is not compatible with the data type of the output of the subquery
> Mismatched columns:
> []
> Left side:
> [bigint, bigint].
> Right side:
> [bigint, bigint].;;
> {code}
> The error message includes the last plan which contains ~100 useless Projects.
> Does not happen in branch-2.2.
> It has something to do with TypeCoercion; it makes a futile attempt to 
> change nullability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-23468) Failure to authenticate with old shuffle service

2018-02-20 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370582#comment-16370582
 ] 

Apache Spark commented on SPARK-23468:
--

User 'vanzin' has created a pull request for this issue:
https://github.com/apache/spark/pull/20643

> Failure to authenticate with old shuffle service
> 
>
> Key: SPARK-23468
> URL: https://issues.apache.org/jira/browse/SPARK-23468
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: Marcelo Vanzin
>Priority: Minor
>
> I ran into this while testing a fix for SPARK-23361. Things seem to work fine 
> with a 2.x shuffle service, but with a 1.6 shuffle service I get this error 
> every once in a while:
> {noformat}
> org.apache.spark.SparkException: Unable to register with external shuffle 
> server due to : java.lang.RuntimeException: 
> javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
> violation. Mismatched response.
> at 
> org.spark-project.guava.base.Throwables.propagate(Throwables.java:160)
> at 
> org.apache.spark.network.sasl.SparkSaslServer.response(SparkSaslServer.java:121)
> at 
> org.apache.spark.network.sasl.SaslRpcHandler.receive(SaslRpcHandler.java:101)
> at 
> org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:154)
> at 
> org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:102)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
> {noformat}
> This is a regression in 2.3 and I have a fix for it as part of the fix for 
> SPARK-23361. I'm filing this separately so that this particular fix is 
> backported to 2.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23468) Failure to authenticate with old shuffle service

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23468:


Assignee: Apache Spark

> Failure to authenticate with old shuffle service
> 
>
> Key: SPARK-23468
> URL: https://issues.apache.org/jira/browse/SPARK-23468
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: Marcelo Vanzin
>Assignee: Apache Spark
>Priority: Minor
>
> I ran into this while testing a fix for SPARK-23361. Things seem to work fine 
> with a 2.x shuffle service, but with a 1.6 shuffle service I get this error 
> every once in a while:
> {noformat}
> org.apache.spark.SparkException: Unable to register with external shuffle 
> server due to : java.lang.RuntimeException: 
> javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
> violation. Mismatched response.
> at 
> org.spark-project.guava.base.Throwables.propagate(Throwables.java:160)
> at 
> org.apache.spark.network.sasl.SparkSaslServer.response(SparkSaslServer.java:121)
> at 
> org.apache.spark.network.sasl.SaslRpcHandler.receive(SaslRpcHandler.java:101)
> at 
> org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:154)
> at 
> org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:102)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
> {noformat}
> This is a regression in 2.3 and I have a fix for it as part of the fix for 
> SPARK-23361. I'm filing this separately so that this particular fix is 
> backported to 2.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23468) Failure to authenticate with old shuffle service

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23468:


Assignee: (was: Apache Spark)

> Failure to authenticate with old shuffle service
> 
>
> Key: SPARK-23468
> URL: https://issues.apache.org/jira/browse/SPARK-23468
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: Marcelo Vanzin
>Priority: Minor
>
> I ran into this while testing a fix for SPARK-23361. Things seem to work fine 
> with a 2.x shuffle service, but with a 1.6 shuffle service I get this error 
> every once in a while:
> {noformat}
> org.apache.spark.SparkException: Unable to register with external shuffle 
> server due to : java.lang.RuntimeException: 
> javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
> violation. Mismatched response.
> at 
> org.spark-project.guava.base.Throwables.propagate(Throwables.java:160)
> at 
> org.apache.spark.network.sasl.SparkSaslServer.response(SparkSaslServer.java:121)
> at 
> org.apache.spark.network.sasl.SaslRpcHandler.receive(SaslRpcHandler.java:101)
> at 
> org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:154)
> at 
> org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:102)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104)
> at 
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
> {noformat}
> This is a regression in 2.3 and I have a fix for it as part of the fix for 
> SPARK-23361. I'm filing this separately so that this particular fix is 
> backported to 2.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-23410) Unable to read jsons in charset different from UTF-8

2018-02-20 Thread Dongjoon Hyun (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370580#comment-16370580
 ] 

Dongjoon Hyun commented on SPARK-23410:
---

I removed the target version, 2.3.0, from here.

> Unable to read jsons in charset different from UTF-8
> 
>
> Key: SPARK-23410
> URL: https://issues.apache.org/jira/browse/SPARK-23410
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Maxim Gekk
>Priority: Major
> Attachments: utf16WithBOM.json
>
>
> Currently the JSON parser is forced to read JSON files in UTF-8. This 
> behavior breaks backward compatibility with Spark 2.2.1 and earlier versions, 
> which can read JSON files in UTF-16, UTF-32 and other encodings thanks to the 
> auto-detection mechanism of the Jackson library. We need to give users back 
> the ability to read JSON files in a specified charset and/or to detect the 
> charset automatically, as before.
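
Until that is restored, a hedged workaround sketch — decode the bytes explicitly and hand already-decoded strings to the JSON reader (the path and charset below match the attached utf16WithBOM.json example and are otherwise illustrative):

{code:scala}
import java.nio.charset.StandardCharsets
import spark.implicits._

// Decode the UTF-16 file ourselves, then parse the decoded lines as JSON.
val decoded = spark.sparkContext
  .binaryFiles("/tmp/utf16WithBOM.json")
  .values
  .flatMap(stream => new String(stream.toArray, StandardCharsets.UTF_16).split("\n"))

val df = spark.read.json(spark.createDataset(decoded))
{code}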



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-23410) Unable to read jsons in charset different from UTF-8

2018-02-20 Thread Dongjoon Hyun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-23410:
--
Target Version/s:   (was: 2.3.0)

> Unable to read jsons in charset different from UTF-8
> 
>
> Key: SPARK-23410
> URL: https://issues.apache.org/jira/browse/SPARK-23410
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Maxim Gekk
>Priority: Major
> Attachments: utf16WithBOM.json
>
>
> Currently the JSON parser is forced to read JSON files in UTF-8. This 
> behavior breaks backward compatibility with Spark 2.2.1 and earlier versions, 
> which can read JSON files in UTF-16, UTF-32 and other encodings thanks to the 
> auto-detection mechanism of the Jackson library. We need to give users back 
> the ability to read JSON files in a specified charset and/or to detect the 
> charset automatically, as before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Marcelo Vanzin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370532#comment-16370532
 ] 

Marcelo Vanzin commented on SPARK-23470:


Sure, I'll take a look in the PM.

> org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow
> --
>
> Key: SPARK-23470
> URL: https://issues.apache.org/jira/browse/SPARK-23470
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Priority: Blocker
>
> I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" in 
> Spark UI. The stack dump says it was running 
> "org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".
> {code}
> "SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
> runnable [0x7fc0ce9f8000]
>java.lang.Thread.State: RUNNABLE
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
>  Source)
>   at java.util.TimSort.binarySort(TimSort.java:296)
>   at java.util.TimSort.sort(TimSort.java:239)
>   at java.util.Arrays.sort(Arrays.java:1512)
>   at java.util.ArrayList.sort(ArrayList.java:1460)
>   at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
>   at 
> java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
>   at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
>   at 
> org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
>   at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
>   at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
>   at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.s

[jira] [Updated] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-23470:
-
Target Version/s: 2.3.0

> org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow
> --
>
> Key: SPARK-23470
> URL: https://issues.apache.org/jira/browse/SPARK-23470
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Priority: Blocker
>
> I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" in 
> Spark UI. The stack dump says it was running 
> "org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".
> {code}
> "SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
> runnable [0x7fc0ce9f8000]
>java.lang.Thread.State: RUNNABLE
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
>  Source)
>   at java.util.TimSort.binarySort(TimSort.java:296)
>   at java.util.TimSort.sort(TimSort.java:239)
>   at java.util.Arrays.sort(Arrays.java:1512)
>   at java.util.ArrayList.sort(ArrayList.java:1460)
>   at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
>   at 
> java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
>   at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
>   at 
> org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
>   at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
>   at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
>   at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>

[jira] [Commented] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370531#comment-16370531
 ] 

Shixiong Zhu commented on SPARK-23470:
--

[~vanzin] could you make a PR to fix it? I can help you test the patch.

> org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow
> --
>
> Key: SPARK-23470
> URL: https://issues.apache.org/jira/browse/SPARK-23470
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Priority: Major
>
> I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" in 
> Spark UI. The stack dump says it was running 
> "org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".
> {code}
> "SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
> runnable [0x7fc0ce9f8000]
>java.lang.Thread.State: RUNNABLE
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
>  Source)
>   at java.util.TimSort.binarySort(TimSort.java:296)
>   at java.util.TimSort.sort(TimSort.java:239)
>   at java.util.Arrays.sort(Arrays.java:1512)
>   at java.util.ArrayList.sort(ArrayList.java:1460)
>   at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
>   at 
> java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
>   at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
>   at 
> org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
>   at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
>   at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
>   at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>

[jira] [Updated] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-23470:
-
Priority: Blocker  (was: Major)

> org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow
> --
>
> Key: SPARK-23470
> URL: https://issues.apache.org/jira/browse/SPARK-23470
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Priority: Blocker
>
> I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" in 
> Spark UI. The stack dump says it was running 
> "org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".
> {code}
> "SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
> runnable [0x7fc0ce9f8000]
>java.lang.Thread.State: RUNNABLE
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
>  Source)
>   at java.util.TimSort.binarySort(TimSort.java:296)
>   at java.util.TimSort.sort(TimSort.java:239)
>   at java.util.Arrays.sort(Arrays.java:1512)
>   at java.util.ArrayList.sort(ArrayList.java:1460)
>   at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
>   at 
> java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
>   at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
>   at 
> org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
>   at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
>   at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
>   at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:111

[jira] [Commented] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Marcelo Vanzin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370508#comment-16370508
 ] 

Marcelo Vanzin commented on SPARK-23470:


It should be pretty easy to make {{lastStageNameAndDescription}} faster. It 
doesn't need the last attempt, just any attempt of the stage, since the name 
and description should be the same. So it could just call 
{{store.stageAttempt(stageId, 0)}} instead of {{lastStageAttempt}}.
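A rough Scala sketch of that suggestion; the helper name and the exact signatures of {{stageAttempt}} and {{asOption}} are assumed from the comment and the stack trace above, not copied from the Spark source:

{code:scala}
import org.apache.spark.status.AppStatusStore

// Hypothetical helper mirroring ApiHelper.lastStageNameAndDescription.
def stageNameAndDescription(store: AppStatusStore, stageId: Int): (String, Option[String]) = {
  // lastStageAttempt(stageId) sorts every attempt of the stage inside InMemoryStore
  // just to return the newest one; attempt 0 is enough here because all attempts of
  // a stage share the same name and description.
  val stage = store.asOption(store.stageAttempt(stageId, 0))
  (stage.map(_.name).getOrElse(""), stage.flatMap(_.description))
}
{code}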

> org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow
> --
>
> Key: SPARK-23470
> URL: https://issues.apache.org/jira/browse/SPARK-23470
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Priority: Major
>
> I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" in 
> Spark UI. The stack dump says it was running 
> "org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".
> {code}
> "SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
> runnable [0x7fc0ce9f8000]
>java.lang.Thread.State: RUNNABLE
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
>  Source)
>   at java.util.TimSort.binarySort(TimSort.java:296)
>   at java.util.TimSort.sort(TimSort.java:239)
>   at java.util.Arrays.sort(Arrays.java:1512)
>   at java.util.ArrayList.sort(ArrayList.java:1460)
>   at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
>   at 
> java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
>   at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
>   at 
> org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
>   at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
>   at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
>   at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHand

[jira] [Commented] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370496#comment-16370496
 ] 

Shixiong Zhu commented on SPARK-23470:
--

[~vanzin] [~cloud_fan]

> org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow
> --
>
> Key: SPARK-23470
> URL: https://issues.apache.org/jira/browse/SPARK-23470
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: Shixiong Zhu
>Priority: Major
>
> I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" in 
> Spark UI. The stack dump says it was running 
> "org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".
> {code}
> "SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
> runnable [0x7fc0ce9f8000]
>java.lang.Thread.State: RUNNABLE
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
>  Source)
>   at java.util.TimSort.binarySort(TimSort.java:296)
>   at java.util.TimSort.sort(TimSort.java:239)
>   at java.util.Arrays.sort(Arrays.java:1512)
>   at java.util.ArrayList.sort(ArrayList.java:1460)
>   at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
>   at 
> java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
>   at 
> java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
>   at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
>   at 
> org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
>   at 
> org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
>   at 
> org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
>   at 
> org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
>   at 
> org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
>   at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
>   at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
>   at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
>   at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.handler.Con

[jira] [Created] (SPARK-23470) org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow

2018-02-20 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created SPARK-23470:


 Summary: 
org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription is too slow
 Key: SPARK-23470
 URL: https://issues.apache.org/jira/browse/SPARK-23470
 Project: Spark
  Issue Type: Bug
  Components: Web UI
Affects Versions: 2.3.0
Reporter: Shixiong Zhu


I was testing 2.3.0 RC3 and found that it's easy to hit "read timeout" in Spark 
UI. The stack dump says it was running 
"org.apache.spark.ui.jobs.ApiHelper.lastStageNameAndDescription".

{code}
"SparkUI-59" #59 daemon prio=5 os_prio=0 tid=0x7fc15b0a3000 nid=0x8dc 
runnable [0x7fc0ce9f8000]
   java.lang.Thread.State: RUNNABLE
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
at 
org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
at 
org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$2(InMemoryStore.java:214)
at 
org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$36/1834982692.compare(Unknown
 Source)
at java.util.TimSort.binarySort(TimSort.java:296)
at java.util.TimSort.sort(TimSort.java:239)
at java.util.Arrays.sort(Arrays.java:1512)
at java.util.ArrayList.sort(ArrayList.java:1460)
at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:387)
at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
at 
java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:210)
at 
java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
at 
java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
at 
org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.hasNext(InMemoryStore.java:278)
at 
org.apache.spark.status.AppStatusStore.lastStageAttempt(AppStatusStore.scala:101)
at 
org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
at 
org.apache.spark.ui.jobs.ApiHelper$$anonfun$38.apply(StagePage.scala:1014)
at 
org.apache.spark.status.AppStatusStore.asOption(AppStatusStore.scala:408)
at 
org.apache.spark.ui.jobs.ApiHelper$.lastStageNameAndDescription(StagePage.scala:1014)
at 
org.apache.spark.ui.jobs.JobDataSource.org$apache$spark$ui$jobs$JobDataSource$$jobRow(AllJobsPage.scala:434)
at 
org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
at 
org.apache.spark.ui.jobs.JobDataSource$$anonfun$24.apply(AllJobsPage.scala:412)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:381)
at 
scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.spark.ui.jobs.JobDataSource.(AllJobsPage.scala:412)
at org.apache.spark.ui.jobs.JobPagedTable.(AllJobsPage.scala:504)
at org.apache.spark.ui.jobs.AllJobsPage.jobsTable(AllJobsPage.scala:246)
at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:295)
at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
at org.apache.spark.ui.WebUI$$anonfun$3.apply(WebUI.scala:98)
at org.apache.spark.ui.JettyUtils$$anon$3.doGet(JettyUtils.scala:90)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:493)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
{code}

According to the he

[jira] [Updated] (SPARK-23469) HashingTF should use corrected MurmurHash3 implementation

2018-02-20 Thread Joseph K. Bradley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph K. Bradley updated SPARK-23469:
--
Description: 
[SPARK-23381] added a corrected MurmurHash3 implementation but left the old 
implementation alone.  In Spark 2.3 and earlier, HashingTF will use the old 
implementation.  (We should not backport a fix for HashingTF since it would be 
a major change of behavior.)  But we should correct HashingTF in Spark 2.4; 
this JIRA is for tracking this fix.
* Update HashingTF to use new implementation of MurmurHash3
* Ensure backwards compatibility for ML persistence by having HashingTF use the 
old MurmurHash3 when a model from Spark 2.3 or earlier is loaded.  We can add a 
Param to allow this.

Also, HashingTF still calls into the old spark.mllib.feature.HashingTF, so I 
recommend we first migrate the code to spark.ml: [SPARK-21748].  We can leave 
spark.mllib alone and just fix MurmurHash3 in spark.ml.

  was:
[SPARK-23381] added a corrected MurmurHash3 implementation but left the old 
implementation alone.  In Spark 2.3 and earlier, HashingTF will use the old 
implementation.  (We should not backport a fix for HashingTF since it would be 
a major change of behavior.)  But we should correct HashingTF in Spark 2.4; 
this JIRA is for tracking this fix.
* Update HashingTF to use new implementation of MurmurHash3
* Ensure backwards compatibility for ML persistence by having HashingTF use the 
old MurmurHash3 when a model from Spark 2.3 or earlier is loaded.  We can add a 
Param to allow this.

Also, HashingTF still calls into the old spark.mllib.feature.HashingTF, so I 
recommend we first migrate the code to spark.ml.  We can leave spark.mllib 
alone and just fix MurmurHash3 in spark.ml.  I will link a JIRA for this 
migration.


> HashingTF should use corrected MurmurHash3 implementation
> -
>
> Key: SPARK-23469
> URL: https://issues.apache.org/jira/browse/SPARK-23469
> Project: Spark
>  Issue Type: Bug
>  Components: ML
>Affects Versions: 2.4.0
>Reporter: Joseph K. Bradley
>Priority: Major
>
> [SPARK-23381] added a corrected MurmurHash3 implementation but left the old 
> implementation alone.  In Spark 2.3 and earlier, HashingTF will use the old 
> implementation.  (We should not backport a fix for HashingTF since it would 
> be a major change of behavior.)  But we should correct HashingTF in Spark 
> 2.4; this JIRA is for tracking this fix.
> * Update HashingTF to use new implementation of MurmurHash3
> * Ensure backwards compatibility for ML persistence by having HashingTF use 
> the old MurmurHash3 when a model from Spark 2.3 or earlier is loaded.  We can 
> add a Param to allow this.
> Also, HashingTF still calls into the old spark.mllib.feature.HashingTF, so I 
> recommend we first migrate the code to spark.ml: [SPARK-21748].  We can leave 
> spark.mllib alone and just fix MurmurHash3 in spark.ml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-23469) HashingTF should use corrected MurmurHash3 implementation

2018-02-20 Thread Joseph K. Bradley (JIRA)
Joseph K. Bradley created SPARK-23469:
-

 Summary: HashingTF should use corrected MurmurHash3 implementation
 Key: SPARK-23469
 URL: https://issues.apache.org/jira/browse/SPARK-23469
 Project: Spark
  Issue Type: Bug
  Components: ML
Affects Versions: 2.4.0
Reporter: Joseph K. Bradley


[SPARK-23381] added a corrected MurmurHash3 implementation but left the old 
implementation alone.  In Spark 2.3 and earlier, HashingTF will use the old 
implementation.  (We should not backport a fix for HashingTF since it would be 
a major change of behavior.)  But we should correct HashingTF in Spark 2.4; 
this JIRA is for tracking this fix.
* Update HashingTF to use new implementation of MurmurHash3
* Ensure backwards compatibility for ML persistence by having HashingTF use the 
old MurmurHash3 when a model from Spark 2.3 or earlier is loaded.  We can add a 
Param to allow this.

Also, HashingTF still calls into the old spark.mllib.feature.HashingTF, so I 
recommend we first migrate the code to spark.ml.  We can leave spark.mllib 
alone and just fix MurmurHash3 in spark.ml.  I will link a JIRA for this 
migration.
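A minimal Scala sketch of the compatibility Param idea; the Param name {{useLegacyHash}} and the two hash-function placeholders are hypothetical, and the real change would live inside {{org.apache.spark.ml.feature.HashingTF}} and its ML persistence code:

{code:scala}
import org.apache.spark.ml.param.{BooleanParam, Params}

trait HasLegacyHash extends Params {
  // Hypothetical Param: loaders would set this to true for models written by
  // Spark <= 2.3 so that their term-to-index mapping does not change.
  final val useLegacyHash: BooleanParam = new BooleanParam(this, "useLegacyHash",
    "use the pre-2.4 MurmurHash3 implementation for backwards-compatible feature indices")
  setDefault(useLegacyHash -> false)

  // legacyMurmur3 / fixedMurmur3 are placeholders for the old and the corrected hash.
  protected def pickHash(legacyMurmur3: Any => Int, fixedMurmur3: Any => Int): Any => Int =
    if ($(useLegacyHash)) legacyMurmur3 else fixedMurmur3
}
{code}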



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-18057) Update structured streaming kafka from 10.0.1 to 10.2.0

2018-02-20 Thread Cody Koeninger (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370460#comment-16370460
 ] 

Cody Koeninger commented on SPARK-18057:


My guess is that DStream-based integrations aren't really on committers' minds.

Happy to help, given clear direction on what artifact naming is likely
to be accepted.



> Update structured streaming kafka from 10.0.1 to 10.2.0
> ---
>
> Key: SPARK-18057
> URL: https://issues.apache.org/jira/browse/SPARK-18057
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Reporter: Cody Koeninger
>Priority: Major
>
> There are a couple of relevant KIPs here, 
> https://archive.apache.org/dist/kafka/0.10.1.0/RELEASE_NOTES.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-23468) Failure to authenticate with old shuffle service

2018-02-20 Thread Marcelo Vanzin (JIRA)
Marcelo Vanzin created SPARK-23468:
--

 Summary: Failure to authenticate with old shuffle service
 Key: SPARK-23468
 URL: https://issues.apache.org/jira/browse/SPARK-23468
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 2.3.0
Reporter: Marcelo Vanzin


I ran into this while testing a fix for SPARK-23361. Things seem to work fine 
with a 2.x shuffle service, but with a 1.6 shuffle service I get this error 
every once in a while:

{noformat}
org.apache.spark.SparkException: Unable to register with external shuffle 
server due to : java.lang.RuntimeException: javax.security.sasl.SaslException: 
DIGEST-MD5: digest response format violation. Mismatched response.
at 
org.spark-project.guava.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.spark.network.sasl.SparkSaslServer.response(SparkSaslServer.java:121)
at 
org.apache.spark.network.sasl.SaslRpcHandler.receive(SaslRpcHandler.java:101)
at 
org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:154)
at 
org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:102)
at 
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104)
at 
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
{noformat}

This is a regression in 2.3 and I have a fix for it as part of the fix for 
SPARK-23361. I'm filing this separately so that this particular fix is 
backported to 2.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-18057) Update structured streaming kafka from 10.0.1 to 10.2.0

2018-02-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SPARK-18057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370434#comment-16370434
 ] 

Sönke Liebau commented on SPARK-18057:
--

The upcoming release of Kafka 1.1 will support delegation tokens, which would 
be a very nice addition to the Spark Streaming Kafka connector - is someone 
actively looking at this currently?


> Update structured streaming kafka from 10.0.1 to 10.2.0
> ---
>
> Key: SPARK-18057
> URL: https://issues.apache.org/jira/browse/SPARK-18057
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Reporter: Cody Koeninger
>Priority: Major
>
> There are a couple of relevant KIPs here, 
> https://archive.apache.org/dist/kafka/0.10.1.0/RELEASE_NOTES.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-23456) Turn on `native` ORC implementation by default

2018-02-20 Thread Xiao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Li resolved SPARK-23456.
-
   Resolution: Fixed
 Assignee: Dongjoon Hyun
Fix Version/s: 2.4.0

> Turn on `native` ORC implementation by default
> --
>
> Key: SPARK-23456
> URL: https://issues.apache.org/jira/browse/SPARK-23456
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
> Fix For: 2.4.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-21783) Turn on ORC filter push-down by default

2018-02-20 Thread Xiao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Li resolved SPARK-21783.
-
   Resolution: Fixed
Fix Version/s: 2.4.0

> Turn on ORC filter push-down by default
> ---
>
> Key: SPARK-21783
> URL: https://issues.apache.org/jira/browse/SPARK-21783
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.2.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Minor
> Fix For: 2.4.0
>
>
> Like Parquet (SPARK-9207), it would be great to turn on the ORC option, too.
> This option has been turned off by default from the beginning, SPARK-2883
> - 
> https://github.com/apache/spark/commit/aa31e431fc09f0477f1c2351c6275769a31aca90#diff-41ef65b9ef5b518f77e2a03559893f4dR149
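A short example for opting in before the defaults change; the configuration keys {{spark.sql.orc.impl}} and {{spark.sql.orc.filterPushdown}} are not quoted in this thread, so treat them as an assumption and verify them against your Spark version:

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// Opt in to the new ORC reader (SPARK-23456) and to filter push-down (SPARK-21783).
spark.conf.set("spark.sql.orc.impl", "native")
spark.conf.set("spark.sql.orc.filterPushdown", "true")

val df = spark.read.orc("/path/to/data.orc")   // illustrative path
df.filter($"id" > 100).show()                  // the predicate can now be pushed to the ORC reader
{code}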



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-21529) Uniontype not supported when reading from Hive tables.

2018-02-20 Thread Tom Wadeson (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370142#comment-16370142
 ] 

Tom Wadeson edited comment on SPARK-21529 at 2/20/18 3:32 PM:
--

Thanks for the update, [~teabot]. I'll keep you posted if we come up with a 
clean fix.


was (Author: tomwadeson):
Thanks for the update. I'll keep you posted if we come up with a clean fix.

> Uniontype not supported when reading from Hive tables.
> --
>
> Key: SPARK-21529
> URL: https://issues.apache.org/jira/browse/SPARK-21529
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: Qubole, DataBricks
>Reporter: Elliot West
>Priority: Major
>  Labels: hive, uniontype
>
> We encounter errors when attempting to read Hive tables whose schema contains 
> the {{uniontype}}. It appears that Catalyst
> does not support the {{uniontype}}, which renders these tables unreadable by 
> Spark (2.1). Although {{uniontype}} is arguably incomplete in the Hive
> query engine, it is fully supported by the storage engine and also by the Avro 
> data format, which we use for these tables. Therefore, I believe it is
> a valid, usable type construct that should be supported by Spark.
> We've attempted to read the table as follows:
> {code}
> spark.sql("select * from etl.tbl where acquisition_instant='20170706T133545Z' 
> limit 5").show
> val tblread = spark.read.table("etl.tbl")
> {code}
> But this always results in the same error message. The pertinent error 
> messages are as follows (full stack trace below):
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype ...
> Caused by: org.apache.spark.sql.catalyst.parser.ParseException: 
> mismatched input '<' expecting
> {, '('}
> (line 1, pos 9)
> == SQL ==
> uniontype -^^^
> {code}
> h2. Full stack trace
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype>>,n:boolean,o:string,p:bigint,q:string>,struct,ag:boolean,ah:string,ai:bigint,aj:string>>
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$.fromHiveColumn(HiveClientImpl.scala:800)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:373)
> at scala.Option.map(Option.scala:146)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:373)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:290)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:231)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:230)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:273)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.getTableOption(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClient$class.getTable(HiveClient.scala:74)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.getTable(HiveClientImpl.scala:79)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable$1.apply(HiveExternalCatalog.scala:118)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable$1.apply(HiveExternalCatalog.scala:118)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog.org$apache$spark$sql$hive$HiveExternalCatalog$$g

[jira] [Commented] (SPARK-21529) Uniontype not supported when reading from Hive tables.

2018-02-20 Thread Tom Wadeson (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370142#comment-16370142
 ] 

Tom Wadeson commented on SPARK-21529:
-

Thanks for the update. I'll keep you posted if we come up with a clean fix.

> Uniontype not supported when reading from Hive tables.
> --
>
> Key: SPARK-21529
> URL: https://issues.apache.org/jira/browse/SPARK-21529
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: Qubole, DataBricks
>Reporter: Elliot West
>Priority: Major
>  Labels: hive, uniontype
>
> We encounter errors when attempting to read Hive tables whose schema contains 
> the {{uniontype}}. It appears that Catalyst
> does not support the {{uniontype}}, which renders these tables unreadable by 
> Spark (2.1). Although {{uniontype}} is arguably incomplete in the Hive
> query engine, it is fully supported by the storage engine and also by the Avro 
> data format, which we use for these tables. Therefore, I believe it is
> a valid, usable type construct that should be supported by Spark.
> We've attempted to read the table as follows:
> {code}
> spark.sql("select * from etl.tbl where acquisition_instant='20170706T133545Z' 
> limit 5").show
> val tblread = spark.read.table("etl.tbl")
> {code}
> But this always results in the same error message. The pertinent error 
> messages are as follows (full stack trace below):
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype ...
> Caused by: org.apache.spark.sql.catalyst.parser.ParseException: 
> mismatched input '<' expecting
> {, '('}
> (line 1, pos 9)
> == SQL ==
> uniontype -^^^
> {code}
> h2. Full stack trace
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype>>,n:boolean,o:string,p:bigint,q:string>,struct,ag:boolean,ah:string,ai:bigint,aj:string>>
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$.fromHiveColumn(HiveClientImpl.scala:800)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:373)
> at scala.Option.map(Option.scala:146)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:373)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:290)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:231)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:230)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:273)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.getTableOption(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClient$class.getTable(HiveClient.scala:74)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.getTable(HiveClientImpl.scala:79)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable$1.apply(HiveExternalCatalog.scala:118)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable$1.apply(HiveExternalCatalog.scala:118)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog.org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable(HiveExternalCatalog.scala:117)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:648)
> at 
> org.

[jira] [Commented] (SPARK-23464) MesosClusterScheduler double-escapes parameters to bash command

2018-02-20 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370132#comment-16370132
 ] 

Apache Spark commented on SPARK-23464:
--

User 'krcz' has created a pull request for this issue:
https://github.com/apache/spark/pull/20641

> MesosClusterScheduler double-escapes parameters to bash command
> ---
>
> Key: SPARK-23464
> URL: https://issues.apache.org/jira/browse/SPARK-23464
> Project: Spark
>  Issue Type: Bug
>  Components: Mesos
>Affects Versions: 2.2.0
> Environment: Spark 2.2.0 with Mesosphere patches (but the problem 
> exists in main repo)
> DC/OS 1.9.5
>Reporter: Marcin Kurczych
>Priority: Major
>
> Parameters passed to the driver launch command in the Mesos container are escaped 
> using the _shellEscape_ function. SPARK-18114 introduced additional wrapping in double 
> quotes. This cancels out the quoting done by _shellEscape_ 
> and makes it impossible to run tasks with whitespace in their parameters, as they are 
> interpreted as additional parameters to the in-container spark-submit.
> This is how a parameter passed to the in-container spark-submit looks now:
> {code:java}
> --conf "spark.executor.extraJavaOptions="-Dfoo=\"first value\" -Dbar=another""
> {code}
> This is how it looks after reverting the SPARK-18114-related commit:
> {code:java}
> --conf spark.executor.extraJavaOptions="-Dfoo=\"first value\" -Dbar=another"
> {code}
> In the current version, submitting a job with such extraJavaOptions causes the 
> following error:
> {code:java}
> Error: Unrecognized option: -Dbar=another
> Usage: spark-submit [options]  [app arguments]
> Usage: spark-submit --kill [submission ID] --master [spark://...]
> Usage: spark-submit --status [submission ID] --master [spark://...]
> Usage: spark-submit run-example [options] example-class [example args]
> Options:
>   --master MASTER_URL spark://host:port, mesos://host:port, yarn, or 
> local.
>   --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally 
> ("client") or
>   on one of the worker machines inside the 
> cluster ("cluster")
>   (Default: client).
> (... further spark-submit help ...)
> {code}
> Reverting SPARK-18114 is the solution to this issue. I can create a 
> pull request on GitHub. I thought about adding unit tests for this, but the 
> methods generating the driver launch command are private.
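A small Scala sketch of the double-wrapping described above; {{shellEscape}} here is a simplified stand-in for the scheduler's escaping, not the actual Mesos scheduler code:

{code:scala}
// Simplified stand-in for shellEscape: wrap in double quotes and escape inner quotes.
def shellEscape(value: String): String =
  "\"" + value.replace("\\", "\\\\").replace("\"", "\\\"") + "\""

val javaOpts = """-Dfoo="first value" -Dbar=another"""

val escapedOnce = shellEscape(javaOpts)
// -> "-Dfoo=\"first value\" -Dbar=another"     (bash keeps this as one word)

val wrappedAgain = "\"" + escapedOnce + "\""
// -> ""-Dfoo=\"first value\" -Dbar=another""   (the outer quotes close immediately,
//    so bash splits the rest on whitespace and spark-submit sees -Dbar=another
//    as a separate, unknown option)
{code}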



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23464) MesosClusterScheduler double-escapes parameters to bash command

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23464:


Assignee: Apache Spark

> MesosClusterScheduler double-escapes parameters to bash command
> ---
>
> Key: SPARK-23464
> URL: https://issues.apache.org/jira/browse/SPARK-23464
> Project: Spark
>  Issue Type: Bug
>  Components: Mesos
>Affects Versions: 2.2.0
> Environment: Spark 2.2.0 with Mesosphere patches (but the problem 
> exists in main repo)
> DC/OS 1.9.5
>Reporter: Marcin Kurczych
>Assignee: Apache Spark
>Priority: Major
>
> Parameters passed to the driver launch command in the Mesos container are escaped 
> using the _shellEscape_ function. SPARK-18114 introduced additional wrapping in double 
> quotes. This cancels out the quoting done by _shellEscape_ 
> and makes it impossible to run tasks with whitespace in their parameters, as they are 
> interpreted as additional parameters to the in-container spark-submit.
> This is how a parameter passed to the in-container spark-submit looks now:
> {code:java}
> --conf "spark.executor.extraJavaOptions="-Dfoo=\"first value\" -Dbar=another""
> {code}
> This is how it looks after reverting the SPARK-18114-related commit:
> {code:java}
> --conf spark.executor.extraJavaOptions="-Dfoo=\"first value\" -Dbar=another"
> {code}
> In the current version, submitting a job with such extraJavaOptions causes the 
> following error:
> {code:java}
> Error: Unrecognized option: -Dbar=another
> Usage: spark-submit [options]  [app arguments]
> Usage: spark-submit --kill [submission ID] --master [spark://...]
> Usage: spark-submit --status [submission ID] --master [spark://...]
> Usage: spark-submit run-example [options] example-class [example args]
> Options:
>   --master MASTER_URL spark://host:port, mesos://host:port, yarn, or 
> local.
>   --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally 
> ("client") or
>   on one of the worker machines inside the 
> cluster ("cluster")
>   (Default: client).
> (... further spark-submit help ...)
> {code}
> Reverting SPARK-18114 resolves the issue. I can create a pull request on GitHub. 
> I thought about adding unit tests for this, but the methods generating the driver 
> launch command are private.
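
To make the double escaping concrete, the following is a minimal Scala sketch. The 
shellEscape helper here is a simplified stand-in for the scheduler's escaping logic 
(an assumption for illustration, not the actual Spark code); it only shows how 
wrapping an already escaped value in a second pair of quotes leaves the embedded 
whitespace unprotected when the shell re-parses the command.

{code:java}
// Simplified stand-in for the scheduler's shellEscape (illustration only,
// not the actual MesosClusterScheduler implementation).
object DoubleEscapeDemo {
  def shellEscape(value: String): String =
    if (value.exists(c => c.isWhitespace || c == '"'))
      "\"" + value.replace("\"", "\\\"") + "\""
    else value

  def main(args: Array[String]): Unit = {
    val opts = """-Dfoo="first value" -Dbar=another"""
    val escapedOnce = shellEscape(s"spark.executor.extraJavaOptions=$opts")
    // Escaped once: the whitespace stays inside one quoted shell word.
    println(escapedOnce)
    // Wrapping the already escaped value in quotes again (the SPARK-18114 change)
    // pairs the extra quotes with the inner ones when the shell re-parses the
    // string, so the whitespace is no longer protected and -Dbar=another ends up
    // as a separate spark-submit argument.
    println("\"" + escapedOnce + "\"")
  }
}
{code}

Running this prints the single-escaped form from the description first, then the 
double-escaped form that triggers the "Unrecognized option: -Dbar=another" error.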



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23464) MesosClusterScheduler double-escapes parameters to bash command

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23464:


Assignee: (was: Apache Spark)

> MesosClusterScheduler double-escapes parameters to bash command
> ---
>
> Key: SPARK-23464
> URL: https://issues.apache.org/jira/browse/SPARK-23464
> Project: Spark
>  Issue Type: Bug
>  Components: Mesos
>Affects Versions: 2.2.0
> Environment: Spark 2.2.0 with Mesosphere patches (but the problem 
> exists in main repo)
> DC/OS 1.9.5
>Reporter: Marcin Kurczych
>Priority: Major
>
> Parameters passed to the driver launch command in the Mesos container are escaped 
> using the _shellEscape_ function. SPARK-18114 introduced additional wrapping in 
> double quotes. This cancels out the quoting done by _shellEscape_ and makes it 
> impossible to run tasks with whitespace in their parameters, as the 
> whitespace-separated pieces are interpreted as additional parameters to the 
> in-container spark-submit.
> This is how the parameter passed to the in-container spark-submit looks now:
> {code:java}
> --conf "spark.executor.extraJavaOptions="-Dfoo=\"first value\" -Dbar=another""
> {code}
> This is how it looks after reverting the SPARK-18114-related commit:
> {code:java}
> --conf spark.executor.extraJavaOptions="-Dfoo=\"first value\" -Dbar=another"
> {code}
> In the current version, submitting a job with such extraJavaOptions causes the 
> following error:
> {code:java}
> Error: Unrecognized option: -Dbar=another
> Usage: spark-submit [options]  [app arguments]
> Usage: spark-submit --kill [submission ID] --master [spark://...]
> Usage: spark-submit --status [submission ID] --master [spark://...]
> Usage: spark-submit run-example [options] example-class [example args]
> Options:
>   --master MASTER_URL spark://host:port, mesos://host:port, yarn, or 
> local.
>   --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally 
> ("client") or
>   on one of the worker machines inside the 
> cluster ("cluster")
>   (Default: client).
> (... further spark-submit help ...)
> {code}
> Reverting SPARK-18114 resolves the issue. I can create a pull request on GitHub. 
> I thought about adding unit tests for this, but the methods generating the driver 
> launch command are private.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-23423) Application declines any offers when killed+active executors reach spark.dynamicAllocation.maxExecutors

2018-02-20 Thread Igor Berman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Berman resolved SPARK-23423.
-
Resolution: Duplicate

> Application declines any offers when killed+active executors reach 
> spark.dynamicAllocation.maxExecutors
> --
>
> Key: SPARK-23423
> URL: https://issues.apache.org/jira/browse/SPARK-23423
> Project: Spark
>  Issue Type: Bug
>  Components: Mesos, Spark Core
>Affects Versions: 2.2.1
>Reporter: Igor Berman
>Priority: Major
>  Labels: Mesos, dynamic_allocation
>
> Hi,
> Mesos version: 1.1.0
> I've noticed rather strange behavior of MesosCoarseGrainedSchedulerBackend when 
> running on Mesos with dynamic allocation on and the maximum number of executors 
> limited by spark.dynamicAllocation.maxExecutors.
> Suppose we have a long-running driver with a cyclic pattern of resource 
> consumption (with some idle time in between); due to dynamic allocation it 
> receives offers and then releases them after the current chunk of work is 
> processed.
> At 
> [https://github.com/apache/spark/blob/master/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L573]
>  the backend compares numExecutors < executorLimit, where numExecutors is defined 
> as slaves.values.map(_.taskIDs.size).sum and slaves holds all slaves ever "met", 
> i.e. both active and killed (see the comment at 
> [https://github.com/apache/spark/blob/master/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L122]).
> The number of task IDs should be updated by statusUpdate, but suppose this update 
> is lost (in fact I don't see 'is now TASK_KILLED' in the logs), so this executor 
> count might be wrong.
>  
> I've created a test that "reproduces" this behavior; I'm not sure how good it is:
> {code:java}
> //MesosCoarseGrainedSchedulerBackendSuite
> test("max executors registered stops to accept offers when dynamic allocation 
> enabled") {
>   setBackend(Map(
> "spark.dynamicAllocation.maxExecutors" -> "1",
> "spark.dynamicAllocation.enabled" -> "true",
> "spark.dynamicAllocation.testing" -> "true"))
>   backend.doRequestTotalExecutors(1)
>   val (mem, cpu) = (backend.executorMemory(sc), 4)
>   val offer1 = createOffer("o1", "s1", mem, cpu)
>   backend.resourceOffers(driver, List(offer1).asJava)
>   verifyTaskLaunched(driver, "o1")
>   backend.doKillExecutors(List("0"))
>   verify(driver, times(1)).killTask(createTaskId("0"))
>   val offer2 = createOffer("o2", "s2", mem, cpu)
>   backend.resourceOffers(driver, List(offer2).asJava)
>   verify(driver, times(1)).declineOffer(offer2.getId)
> }{code}
>  
>  
> Workaround: don't set maxExecutors with dynamic allocation on.
>  
> Please advise.
> Igor
> Marking you, friends, since you were the last to touch this piece of code and can 
> probably advise something ([~vanzin], [~skonto], [~susanxhuynh])
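
A minimal sketch of the counting problem described above, with hypothetical 
simplified types rather than the real backend state: if the TASK_KILLED update for 
executor "0" is lost, its task ID keeps counting toward executorLimit and a fresh 
offer is declined even though nothing is running.

{code:java}
// Minimal sketch (hypothetical, simplified types - not the real
// MesosCoarseGrainedSchedulerBackend) of how stale task IDs from killed
// slaves can exhaust executorLimit.
case class SlaveState(taskIDs: Set[String])

object StaleCountDemo {
  def main(args: Array[String]): Unit = {
    val executorLimit = 1
    val slaves = Map(
      // executor "0" was killed, but its TASK_KILLED update was lost,
      // so the task ID is never removed from this entry
      "s1" -> SlaveState(Set("0")),
      // a fresh slave sends an offer
      "s2" -> SlaveState(Set.empty[String])
    )
    val numExecutors = slaves.values.map(_.taskIDs.size).sum // = 1
    // numExecutors < executorLimit is false, so the new offer is declined
    // even though no executor is actually running
    println(numExecutors < executorLimit) // false
  }
}
{code}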



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-23423) Application declines any offers when killed+active executors reach spark.dynamicAllocation.maxExecutors

2018-02-20 Thread Igor Berman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370114#comment-16370114
 ] 

Igor Berman commented on SPARK-23423:
-

[~skonto] I've investigated the failures a bit more and found 2 reasons in our 
case: OOM errors that happen sporadically (not sure why exactly), and JDWP port 
collisions (probably Mesos tried to start a 2nd executor on the same Mesos slave 
for some application).

We statically configure a JDWP port for every application so that we can debug the 
executor's code if needed; probably this is the main reason in our case. Maybe I 
need to disallow starting more than one executor per slave for a given application.

 

I'll close this Jira as a duplicate in favor of SPARK-19755. I've also prepared a 
PR to fix SPARK-19755, see https://github.com/apache/spark/pull/20640
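
As a side note on the static JDWP ports mentioned above, one possible way to avoid 
the collision (an assumption, not something proposed in this thread) is to let the 
JVM pick a free debug port via address=0:

{code:java}
// Hedged sketch: let JDWP bind an arbitrary free port (address=0) so two
// executors on the same slave do not collide on a statically configured port.
// The chosen port is reported in the executor log when the agent starts.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.executor.extraJavaOptions",
    "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=0")
{code}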

> Application declines any offers when killed+active executors reach 
> spark.dynamicAllocation.maxExecutors
> --
>
> Key: SPARK-23423
> URL: https://issues.apache.org/jira/browse/SPARK-23423
> Project: Spark
>  Issue Type: Bug
>  Components: Mesos, Spark Core
>Affects Versions: 2.2.1
>Reporter: Igor Berman
>Priority: Major
>  Labels: Mesos, dynamic_allocation
>
> Hi,
> Mesos version: 1.1.0
> I've noticed rather strange behavior of MesosCoarseGrainedSchedulerBackend when 
> running on Mesos with dynamic allocation on and the maximum number of executors 
> limited by spark.dynamicAllocation.maxExecutors.
> Suppose we have a long-running driver with a cyclic pattern of resource 
> consumption (with some idle time in between); due to dynamic allocation it 
> receives offers and then releases them after the current chunk of work is 
> processed.
> At 
> [https://github.com/apache/spark/blob/master/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L573]
>  the backend compares numExecutors < executorLimit, where numExecutors is defined 
> as slaves.values.map(_.taskIDs.size).sum and slaves holds all slaves ever "met", 
> i.e. both active and killed (see the comment at 
> [https://github.com/apache/spark/blob/master/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L122]).
> The number of task IDs should be updated by statusUpdate, but suppose this update 
> is lost (in fact I don't see 'is now TASK_KILLED' in the logs), so this executor 
> count might be wrong.
>  
> I've created a test that "reproduces" this behavior; I'm not sure how good it is:
> {code:java}
> //MesosCoarseGrainedSchedulerBackendSuite
> test("max executors registered stops to accept offers when dynamic allocation 
> enabled") {
>   setBackend(Map(
> "spark.dynamicAllocation.maxExecutors" -> "1",
> "spark.dynamicAllocation.enabled" -> "true",
> "spark.dynamicAllocation.testing" -> "true"))
>   backend.doRequestTotalExecutors(1)
>   val (mem, cpu) = (backend.executorMemory(sc), 4)
>   val offer1 = createOffer("o1", "s1", mem, cpu)
>   backend.resourceOffers(driver, List(offer1).asJava)
>   verifyTaskLaunched(driver, "o1")
>   backend.doKillExecutors(List("0"))
>   verify(driver, times(1)).killTask(createTaskId("0"))
>   val offer2 = createOffer("o2", "s2", mem, cpu)
>   backend.resourceOffers(driver, List(offer2).asJava)
>   verify(driver, times(1)).declineOffer(offer2.getId)
> }{code}
>  
>  
> Workaround: don't set maxExecutors with dynamic allocation on.
>  
> Please advise.
> Igor
> Marking you, friends, since you were the last to touch this piece of code and can 
> probably advise something ([~vanzin], [~skonto], [~susanxhuynh])



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-21529) Uniontype not supported when reading from Hive tables.

2018-02-20 Thread Elliot West (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370100#comment-16370100
 ] 

Elliot West edited comment on SPARK-21529 at 2/20/18 2:44 PM:
--

Yes [~tomwadeson], but not in a nice way:
 * We implemented a bunch of UDFs to read union types in Hive (HIVE-15434).
 * We run a downstream Hive ETL to restructure the data, removing the union 
type for downstream Spark jobs.
 * We've prohibited future use of the union type by data producers, because 
it's so painful, and not supported by most data processing frameworks.

If you can improve on the above, we'd be keen to hear of your experiences and 
solutions.
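
A rough sketch of the restructuring step from the list above, using assumed column 
names rather than the reporters' actual ETL: a union column can be flattened into a 
tagged struct of nullable fields, which Spark has no trouble reading.

{code:java}
// Hedged sketch (an illustration, not the actual downstream ETL): a
// uniontype<string, bigint> column flattened into a tag plus one nullable
// field per alternative.
import org.apache.spark.sql.types._

val flattenedUnion = StructType(Seq(
  StructField("tag", IntegerType, nullable = false),        // which alternative was set
  StructField("alt0_string", StringType, nullable = true),  // value when tag == 0
  StructField("alt1_bigint", LongType, nullable = true)))   // value when tag == 1
{code}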


was (Author: teabot):
Yes [~tomwadeson], but not in a nice way:
 * We implemented a bunch of UDFs to read union types in Hive (HIVE-15434).
 * We run a downstream Hive ETL to restructure the data, removing the union 
type for downstream Spark jobs.
 * We've prohibited future use of the union type by data producers, because 
it's so painful, and not supported by most data processing frameworks.

If you can improve on the above, then we'd be keen to hear of your experiences 
and solutions.

> Uniontype not supported when reading from Hive tables.
> --
>
> Key: SPARK-21529
> URL: https://issues.apache.org/jira/browse/SPARK-21529
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: Qubole, DataBricks
>Reporter: Elliot West
>Priority: Major
>  Labels: hive, uniontype
>
> We encounter errors when attempting to read Hive tables whose schema contains 
> the {{uniontype}}. It appears that Catalyst does not support the {{uniontype}}, 
> which renders these tables unreadable by Spark (2.1). Although {{uniontype}} is 
> arguably incomplete in the Hive query engine, it is fully supported by the 
> storage engine and also by the Avro data format, which we use for these tables. 
> Therefore, I believe it is a valid, usable type construct that should be 
> supported by Spark.
> We've attempted to read the table as follows:
> {code}
> spark.sql("select * from etl.tbl where acquisition_instant='20170706T133545Z' 
> limit 5").show
> val tblread = spark.read.table("etl.tbl")
> {code}
> But this always results in the same error message. The pertinent error 
> messages are as follows (full stack trace below):
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype ...
> Caused by: org.apache.spark.sql.catalyst.parser.ParseException: 
> mismatched input '<' expecting
> {, '('}
> (line 1, pos 9)
> == SQL ==
> uniontype -^^^
> {code}
> h2. Full stack trace
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype>>,n:boolean,o:string,p:bigint,q:string>,struct,ag:boolean,ah:string,ai:bigint,aj:string>>
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$.fromHiveColumn(HiveClientImpl.scala:800)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:373)
> at scala.Option.map(Option.scala:146)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:373)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:290)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:231)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:230)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:273)
> at 
> org.apache.spark.

[jira] [Comment Edited] (SPARK-21529) Uniontype not supported when reading from Hive tables.

2018-02-20 Thread Elliot West (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370100#comment-16370100
 ] 

Elliot West edited comment on SPARK-21529 at 2/20/18 2:44 PM:
--

Yes [~tomwadeson], but not in a nice way:
 * We implemented a bunch of UDFs to read union types in Hive (HIVE-15434).
 * We run a downstream Hive ETL to restructure the data, removing the union 
type for downstream Spark jobs.
 * We've prohibited future use of the union type by data producers, because 
it's so painful, and not supported by most data processing frameworks.

If you can improve on the above, then we'd be keen to hear of your experiences 
and solutions.


was (Author: teabot):
Yes [~tomwadeson], but not in a nice way:
 * We implemented a bunch of UDFs to read union types in Hive (HIVE-15434).
 * We run a downstream Hive ETL to restructure the data, removing the union 
type for downstream Spark jobs.
 * We've prohibited future use of the union type by data producers, because 
it's so painful, and not supported by most data processing frameworks.

> Uniontype not supported when reading from Hive tables.
> --
>
> Key: SPARK-21529
> URL: https://issues.apache.org/jira/browse/SPARK-21529
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: Qubole, DataBricks
>Reporter: Elliot West
>Priority: Major
>  Labels: hive, uniontype
>
> We encounter errors when attempting to read Hive tables whose schema contains 
> the {{uniontype}}. It appears that Catalyst does not support the {{uniontype}}, 
> which renders these tables unreadable by Spark (2.1). Although {{uniontype}} is 
> arguably incomplete in the Hive query engine, it is fully supported by the 
> storage engine and also by the Avro data format, which we use for these tables. 
> Therefore, I believe it is a valid, usable type construct that should be 
> supported by Spark.
> We've attempted to read the table as follows:
> {code}
> spark.sql("select * from etl.tbl where acquisition_instant='20170706T133545Z' 
> limit 5").show
> val tblread = spark.read.table("etl.tbl")
> {code}
> But this always results in the same error message. The pertinent error 
> messages are as follows (full stack trace below):
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype ...
> Caused by: org.apache.spark.sql.catalyst.parser.ParseException: 
> mismatched input '<' expecting
> {, '('}
> (line 1, pos 9)
> == SQL ==
> uniontype -^^^
> {code}
> h2. Full stack trace
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype>>,n:boolean,o:string,p:bigint,q:string>,struct,ag:boolean,ah:string,ai:bigint,aj:string>>
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$.fromHiveColumn(HiveClientImpl.scala:800)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:373)
> at scala.Option.map(Option.scala:146)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:373)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:290)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:231)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:230)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:273)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.getTableOption(HiveClientImpl.scala:371)
> at 
> org.apache.

[jira] [Comment Edited] (SPARK-21529) Uniontype not supported when reading from Hive tables.

2018-02-20 Thread Elliot West (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370100#comment-16370100
 ] 

Elliot West edited comment on SPARK-21529 at 2/20/18 2:43 PM:
--

Yes [~tomwadeson], but not in a nice way:
 * We implemented a bunch of UDFs to read union types in Hive (HIVE-15434).
 * We run a downstream Hive ETL to restructure the data, removing the union 
type for downstream Spark jobs.
 * We've prohibited future use of the union type by data producers, because 
it's so painful, and not supported by most data processing frameworks.


was (Author: teabot):
Yes [~tomwadeson], but not in a nice way:
 * We implemented a bunch of UDFs to read union types in Hive (HIVE-15434).
 * We run a downstream Hive ETL to reformat the data, removing the union type 
for downstream Spark jobs.
 * We've prohibited future use of the union type by data producers, because 
it's so painful, and not supported by most data processing frameworks.

> Uniontype not supported when reading from Hive tables.
> --
>
> Key: SPARK-21529
> URL: https://issues.apache.org/jira/browse/SPARK-21529
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: Qubole, DataBricks
>Reporter: Elliot West
>Priority: Major
>  Labels: hive, uniontype
>
> We encounter errors when attempting to read Hive tables whose schema contains 
> the {{uniontype}}. It appears that Catalyst does not support the {{uniontype}}, 
> which renders these tables unreadable by Spark (2.1). Although {{uniontype}} is 
> arguably incomplete in the Hive query engine, it is fully supported by the 
> storage engine and also by the Avro data format, which we use for these tables. 
> Therefore, I believe it is a valid, usable type construct that should be 
> supported by Spark.
> We've attempted to read the table as follows:
> {code}
> spark.sql("select * from etl.tbl where acquisition_instant='20170706T133545Z' 
> limit 5").show
> val tblread = spark.read.table("etl.tbl")
> {code}
> But this always results in the same error message. The pertinent error 
> messages are as follows (full stack trace below):
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype ...
> Caused by: org.apache.spark.sql.catalyst.parser.ParseException: 
> mismatched input '<' expecting
> {, '('}
> (line 1, pos 9)
> == SQL ==
> uniontype -^^^
> {code}
> h2. Full stack trace
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype>>,n:boolean,o:string,p:bigint,q:string>,struct,ag:boolean,ah:string,ai:bigint,aj:string>>
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$.fromHiveColumn(HiveClientImpl.scala:800)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:373)
> at scala.Option.map(Option.scala:146)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:373)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:290)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:231)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:230)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:273)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.getTableOption(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClient$class.getTable(HiveClient.scala:74)
> at 
> org.apache.spark.sql.h

[jira] [Comment Edited] (SPARK-21529) Uniontype not supported when reading from Hive tables.

2018-02-20 Thread Elliot West (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370100#comment-16370100
 ] 

Elliot West edited comment on SPARK-21529 at 2/20/18 2:34 PM:
--

Yes [~tomwadeson], but not in a nice way:
 * We implemented a bunch of UDFs to read union types in Hive (HIVE-15434).
 * We run a downstream Hive ETL to reformat the data, removing the union type 
for downstream Spark jobs.
 * We've prohibited future use of the union type by data producers, because 
it's so painful, and not supported by most data processing frameworks.


was (Author: teabot):
Yes [~tomwadeson], but not in a nice way:
 * We implemented a bunch of UDFs to read unions in Hive (HIVE-15434).
 * We run a downstream Hive ETL to reformat the data, removing the union type 
for downstream Spark jobs.
 * We've prohibited future use of the union type by data producers, because 
it's so painful, and not supported by most data processing frameworks.

> Uniontype not supported when reading from Hive tables.
> --
>
> Key: SPARK-21529
> URL: https://issues.apache.org/jira/browse/SPARK-21529
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: Qubole, DataBricks
>Reporter: Elliot West
>Priority: Major
>  Labels: hive, uniontype
>
> We encounter errors when attempting to read Hive tables whose schema contains 
> the {{uniontype}}. It appears that Catalyst does not support the {{uniontype}}, 
> which renders these tables unreadable by Spark (2.1). Although {{uniontype}} is 
> arguably incomplete in the Hive query engine, it is fully supported by the 
> storage engine and also by the Avro data format, which we use for these tables. 
> Therefore, I believe it is a valid, usable type construct that should be 
> supported by Spark.
> We've attempted to read the table as follows:
> {code}
> spark.sql("select * from etl.tbl where acquisition_instant='20170706T133545Z' 
> limit 5").show
> val tblread = spark.read.table("etl.tbl")
> {code}
> But this always results in the same error message. The pertinent error 
> messages are as follows (full stack trace below):
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype ...
> Caused by: org.apache.spark.sql.catalyst.parser.ParseException: 
> mismatched input '<' expecting
> {, '('}
> (line 1, pos 9)
> == SQL ==
> uniontype -^^^
> {code}
> h2. Full stack trace
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype>>,n:boolean,o:string,p:bigint,q:string>,struct,ag:boolean,ah:string,ai:bigint,aj:string>>
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$.fromHiveColumn(HiveClientImpl.scala:800)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:373)
> at scala.Option.map(Option.scala:146)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:373)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:290)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:231)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:230)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:273)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.getTableOption(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClient$class.getTable(HiveClient.scala:74)
> at 
> org.apache.spark.sql.hive.clie

[jira] [Commented] (SPARK-21529) Uniontype not supported when reading from Hive tables.

2018-02-20 Thread Elliot West (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370100#comment-16370100
 ] 

Elliot West commented on SPARK-21529:
-

Yes [~tomwadeson], but not in a nice way:
 * We implemented a bunch of UDFs to read unions in Hive (HIVE-15434).
 * We run a downstream Hive ETL to reformat the data, removing the union type 
for downstream Spark jobs.
 * We've prohibited future use of the union type by data producers, because 
it's so painful, and not supported by most data processing frameworks.

> Uniontype not supported when reading from Hive tables.
> --
>
> Key: SPARK-21529
> URL: https://issues.apache.org/jira/browse/SPARK-21529
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: Qubole, DataBricks
>Reporter: Elliot West
>Priority: Major
>  Labels: hive, uniontype
>
> We encounter errors when attempting to read Hive tables whose schema contains 
> the {{uniontype}}. It appears that Catalyst does not support the {{uniontype}}, 
> which renders these tables unreadable by Spark (2.1). Although {{uniontype}} is 
> arguably incomplete in the Hive query engine, it is fully supported by the 
> storage engine and also by the Avro data format, which we use for these tables. 
> Therefore, I believe it is a valid, usable type construct that should be 
> supported by Spark.
> We've attempted to read the table as follows:
> {code}
> spark.sql("select * from etl.tbl where acquisition_instant='20170706T133545Z' 
> limit 5").show
> val tblread = spark.read.table("etl.tbl")
> {code}
> But this always results in the same error message. The pertinent error 
> messages are as follows (full stack trace below):
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype ...
> Caused by: org.apache.spark.sql.catalyst.parser.ParseException: 
> mismatched input '<' expecting
> {, '('}
> (line 1, pos 9)
> == SQL ==
> uniontype -^^^
> {code}
> h2. Full stack trace
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype>>,n:boolean,o:string,p:bigint,q:string>,struct,ag:boolean,ah:string,ai:bigint,aj:string>>
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$.fromHiveColumn(HiveClientImpl.scala:800)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:373)
> at scala.Option.map(Option.scala:146)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:373)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:290)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:231)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:230)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:273)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.getTableOption(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClient$class.getTable(HiveClient.scala:74)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.getTable(HiveClientImpl.scala:79)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable$1.apply(HiveExternalCatalog.scala:118)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable$1.apply(HiveExternalCatalog.scala:118)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveEx

[jira] [Updated] (SPARK-23448) Dataframe returns wrong result when column don't respect datatype

2018-02-20 Thread Liang-Chi Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang-Chi Hsieh updated SPARK-23448:

Component/s: (was: Spark Core)
 SQL

> Dataframe returns wrong result when column don't respect datatype
> -
>
> Key: SPARK-23448
> URL: https://issues.apache.org/jira/browse/SPARK-23448
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.2
> Environment: Local
>Reporter: Ahmed ZAROUI
>Priority: Major
>
> I have the following JSON file that contains some noisy data (a String instead 
> of an Array):
>  
> {code:java}
> {"attr1":"val1","attr2":"[\"val2\"]"}
> {"attr1":"val1","attr2":["val2"]}
> {code}
> And I need to specify the schema programmatically like this:
>  
> {code:java}
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.sql.types._
> // "input" below is the path to the JSON file shown above
> implicit val spark = SparkSession
>   .builder()
>   .master("local[*]")
>   .config("spark.ui.enabled", false)
>   .config("spark.sql.caseSensitive", "True")
>   .getOrCreate()
> import spark.implicits._
> val schema = StructType(
>   Seq(StructField("attr1", StringType, true),
>   StructField("attr2", ArrayType(StringType, true), true)))
> spark.read.schema(schema).json(input).collect().foreach(println)
> {code}
> The result given by this code is:
> {code:java}
> [null,null]
> [val1,WrappedArray(val2)]
> {code}
> Instead of putting null only in the corrupted column, all columns of the first 
> message are null.
>  
>  
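
One possible way to surface the noisy record instead of an all-null row, sketched 
under the assumption that the default PERMISSIVE parse mode and the corrupt-record 
column behave here as documented (handling of such records can differ between Spark 
versions): add a corrupt-record column to the user-supplied schema. The snippet 
reuses the spark session and input path from the report above.

{code:java}
// Hedged sketch: add a corrupt-record column so the mismatched line is captured
// rather than silently producing an all-null row.
import org.apache.spark.sql.types._

val schemaWithCorrupt = StructType(Seq(
  StructField("attr1", StringType, true),
  StructField("attr2", ArrayType(StringType, true), true),
  StructField("_corrupt_record", StringType, true)))

spark.read
  .schema(schemaWithCorrupt)
  .json(input)
  .show(false)
// Expected shape (may vary by version): the row whose attr2 is a plain string
// has attr1/attr2 null and the raw JSON preserved in _corrupt_record.
{code}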



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-10912) Improve Spark metrics executor.filesystem

2018-02-20 Thread Gil Vernik (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370089#comment-16370089
 ] 

Gil Vernik commented on SPARK-10912:


This patch needs to be generic and should not hardcode "s3a"; rather, it should 
take the value from configuration. This way other connectors will benefit from 
this patch as well. [~srowen] how does this sound?
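
A hedged sketch of what taking the schemes from configuration could look like; the 
class name and the config key below are hypothetical, and this only mirrors the 
general shape of ExecutorSource-style gauges, not the attached patch.

{code:java}
// Hypothetical sketch: register filesystem gauges for schemes listed in a
// (made-up) config key instead of hardcoding "s3a".
import scala.collection.JavaConverters._
import com.codahale.metrics.{Gauge, MetricRegistry}
import org.apache.hadoop.fs.FileSystem

class ConfigurableFsSource(registry: MetricRegistry, schemes: Seq[String]) {
  private def stats(scheme: String) =
    FileSystem.getAllStatistics.asScala.find(_.getScheme == scheme)

  schemes.foreach { scheme =>
    registry.register(s"filesystem.$scheme.read_bytes", new Gauge[Long] {
      override def getValue: Long = stats(scheme).map(_.getBytesRead).getOrElse(0L)
    })
    registry.register(s"filesystem.$scheme.write_bytes", new Gauge[Long] {
      override def getValue: Long = stats(scheme).map(_.getBytesWritten).getOrElse(0L)
    })
  }
}

// e.g. the scheme list could come from something like (hypothetical key):
//   sparkConf.get("spark.metrics.executor.fsSchemes", "hdfs,file,s3a").split(",").toSeq
{code}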

> Improve Spark metrics executor.filesystem
> -
>
> Key: SPARK-10912
> URL: https://issues.apache.org/jira/browse/SPARK-10912
> Project: Spark
>  Issue Type: Improvement
>  Components: Deploy
>Affects Versions: 1.5.0
>Reporter: Yongjia Wang
>Priority: Minor
> Attachments: s3a_metrics.patch
>
>
> org.apache.spark.executor.ExecutorSource has 2 filesystem metrics: "hdfs" and 
> "file". I started using s3 as the persistent storage with a Spark standalone 
> cluster in EC2, and s3 read/write metrics do not appear anywhere. The 'file' 
> metric appears to cover only the driver reading local files; it would also be 
> nice to report shuffle read/write metrics to help with optimization.
> I think these 2 things (s3 and shuffle) are very useful and cover all the missing 
> information about Spark IO, especially for an s3 setup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21529) Uniontype not supported when reading from Hive tables.

2018-02-20 Thread Tom Wadeson (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370088#comment-16370088
 ] 

Tom Wadeson commented on SPARK-21529:
-

I've just bumped into this exact issue. Did you manage to find a workaround, 
[~teabot]?

> Uniontype not supported when reading from Hive tables.
> --
>
> Key: SPARK-21529
> URL: https://issues.apache.org/jira/browse/SPARK-21529
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
> Environment: Qubole, DataBricks
>Reporter: Elliot West
>Priority: Major
>  Labels: hive, uniontype
>
> We encounter errors when attempting to read Hive tables whose schema contains 
> the {{uniontype}}. It appears that Catalyst does not support the {{uniontype}}, 
> which renders these tables unreadable by Spark (2.1). Although {{uniontype}} is 
> arguably incomplete in the Hive query engine, it is fully supported by the 
> storage engine and also by the Avro data format, which we use for these tables. 
> Therefore, I believe it is a valid, usable type construct that should be 
> supported by Spark.
> We've attempted to read the table as follows:
> {code}
> spark.sql("select * from etl.tbl where acquisition_instant='20170706T133545Z' 
> limit 5").show
> val tblread = spark.read.table("etl.tbl")
> {code}
> But this always results in the same error message. The pertinent error 
> messages are as follows (full stack trace below):
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype ...
> Caused by: org.apache.spark.sql.catalyst.parser.ParseException: 
> mismatched input '<' expecting
> {, '('}
> (line 1, pos 9)
> == SQL ==
> uniontype -^^^
> {code}
> h2. Full stack trace
> {code}
> org.apache.spark.SparkException: Cannot recognize hive type string: 
> uniontype>>,n:boolean,o:string,p:bigint,q:string>,struct,ag:boolean,ah:string,ai:bigint,aj:string>>
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$.fromHiveColumn(HiveClientImpl.scala:800)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11$$anonfun$7.apply(HiveClientImpl.scala:377)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:377)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$11.apply(HiveClientImpl.scala:373)
> at scala.Option.map(Option.scala:146)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:373)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:290)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:231)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:230)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:273)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.getTableOption(HiveClientImpl.scala:371)
> at 
> org.apache.spark.sql.hive.client.HiveClient$class.getTable(HiveClient.scala:74)
> at 
> org.apache.spark.sql.hive.client.HiveClientImpl.getTable(HiveClientImpl.scala:79)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable$1.apply(HiveExternalCatalog.scala:118)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable$1.apply(HiveExternalCatalog.scala:118)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog.org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable(HiveExternalCatalog.scala:117)
> at 
> org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:648)

[jira] [Comment Edited] (SPARK-23423) Application declines any offers when killed+active executors reach spark.dynamicAllocation.maxExecutors

2018-02-20 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370074#comment-16370074
 ] 

Stavros Kontopoulos edited comment on SPARK-23423 at 2/20/18 2:04 PM:
--

[~igor.berman] Thanks for supplying the info. We are aware of SPARK-19755; we have 
actually wanted to fix this for quite some time. The hardcoded value is not the way 
to go, for sure. One note here, though: try to avoid the failure of the executors 
in the first place; even if you tolerate more failures, they shouldn't fail. AFAIK 
you can allocate different port ranges for your tasks, or by default ports are 
random from what I recall, and the next port is tried if there is a collision. I am 
still curious how they fail... maybe there is an issue there as well... 
[~susanxhuynh]


was (Author: skonto):
[~igor.berman] Thnx for supplying the info. We are aware of SPARK-19755 
actually we want to fix this for quite some time. The hardcoded value is not 
the way to go for sure. One note here though.. try to avoid also the failure of 
the executors in the first place, even if you tolerate more failures they 
shouldnt fail. You can allocate different port ranges AFIK for your tasks or by 
default ports are random from what I recall, and the next port is tried if 
there is a collision. I am still curious how they fail... maybe there is an 
issue there as well... 

> Application declines any offers when killed+active executors reach 
> spark.dynamicAllocation.maxExecutors
> --
>
> Key: SPARK-23423
> URL: https://issues.apache.org/jira/browse/SPARK-23423
> Project: Spark
>  Issue Type: Bug
>  Components: Mesos, Spark Core
>Affects Versions: 2.2.1
>Reporter: Igor Berman
>Priority: Major
>  Labels: Mesos, dynamic_allocation
>
> Hi,
> Mesos version: 1.1.0
> I've noticed rather strange behavior of MesosCoarseGrainedSchedulerBackend when 
> running on Mesos with dynamic allocation on and the maximum number of executors 
> limited by spark.dynamicAllocation.maxExecutors.
> Suppose we have a long-running driver with a cyclic pattern of resource 
> consumption (with some idle time in between); due to dynamic allocation it 
> receives offers and then releases them after the current chunk of work is 
> processed.
> At 
> [https://github.com/apache/spark/blob/master/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L573]
>  the backend compares numExecutors < executorLimit, where numExecutors is defined 
> as slaves.values.map(_.taskIDs.size).sum and slaves holds all slaves ever "met", 
> i.e. both active and killed (see the comment at 
> [https://github.com/apache/spark/blob/master/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L122]).
> The number of task IDs should be updated by statusUpdate, but suppose this update 
> is lost (in fact I don't see 'is now TASK_KILLED' in the logs), so this executor 
> count might be wrong.
>  
> I've created a test that "reproduces" this behavior; I'm not sure how good it is:
> {code:java}
> //MesosCoarseGrainedSchedulerBackendSuite
> test("max executors registered stops to accept offers when dynamic allocation 
> enabled") {
>   setBackend(Map(
> "spark.dynamicAllocation.maxExecutors" -> "1",
> "spark.dynamicAllocation.enabled" -> "true",
> "spark.dynamicAllocation.testing" -> "true"))
>   backend.doRequestTotalExecutors(1)
>   val (mem, cpu) = (backend.executorMemory(sc), 4)
>   val offer1 = createOffer("o1", "s1", mem, cpu)
>   backend.resourceOffers(driver, List(offer1).asJava)
>   verifyTaskLaunched(driver, "o1")
>   backend.doKillExecutors(List("0"))
>   verify(driver, times(1)).killTask(createTaskId("0"))
>   val offer2 = createOffer("o2", "s2", mem, cpu)
>   backend.resourceOffers(driver, List(offer2).asJava)
>   verify(driver, times(1)).declineOffer(offer2.getId)
> }{code}
>  
>  
> Workaround: don't set maxExecutors with dynamic allocation on.
>  
> Please advise.
> Igor
> Marking you, friends, since you were the last to touch this piece of code and can 
> probably advise something ([~vanzin], [~skonto], [~susanxhuynh])



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-23423) Application declines any offers when killed+active executors reach spark.dynamicAllocation.maxExecutors

2018-02-20 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370074#comment-16370074
 ] 

Stavros Kontopoulos edited comment on SPARK-23423 at 2/20/18 2:04 PM:
--

[~igor.berman] Thanks for supplying the info. We are aware of SPARK-19755; we have 
actually wanted to fix this for quite some time. The hardcoded value is not the way 
to go, for sure. One note here, though: try to avoid the failure of the executors 
in the first place; even if you tolerate more failures, they shouldn't fail. AFAIK 
you can allocate different port ranges for your tasks, or by default ports are 
random from what I recall, and the next port is tried if there is a collision. I am 
still curious how they fail... maybe there is an issue there as well... 


was (Author: skonto):
[~igor.berman] Thnx for supplying the info. We are aware of SPARK-19755 
actually we want to fix this for quite some time. The hardcoded value is not 
the way to go for sure. One note here though.. try to avoid also the failure of 
the executors in the first place, even if you tolerate more failures they 
shouldnt fail. You can allocate different port ranges AFIK for your tasks or by 
default ports are random from what I recall. I am still curious how they 
fail... maybe there is an issue there as well... 

> Application declines any offers when killed+active executors reach 
> spark.dynamicAllocation.maxExecutors
> --
>
> Key: SPARK-23423
> URL: https://issues.apache.org/jira/browse/SPARK-23423
> Project: Spark
>  Issue Type: Bug
>  Components: Mesos, Spark Core
>Affects Versions: 2.2.1
>Reporter: Igor Berman
>Priority: Major
>  Labels: Mesos, dynamic_allocation
>
> Hi,
> Mesos version: 1.1.0
> I've noticed rather strange behavior of MesosCoarseGrainedSchedulerBackend when 
> running on Mesos with dynamic allocation on and the maximum number of executors 
> limited by spark.dynamicAllocation.maxExecutors.
> Suppose we have a long-running driver with a cyclic pattern of resource 
> consumption (with some idle time in between); due to dynamic allocation it 
> receives offers and then releases them after the current chunk of work is 
> processed.
> At 
> [https://github.com/apache/spark/blob/master/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L573]
>  the backend compares numExecutors < executorLimit, where numExecutors is defined 
> as slaves.values.map(_.taskIDs.size).sum and slaves holds all slaves ever "met", 
> i.e. both active and killed (see the comment at 
> [https://github.com/apache/spark/blob/master/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L122]).
> The number of task IDs should be updated by statusUpdate, but suppose this update 
> is lost (in fact I don't see 'is now TASK_KILLED' in the logs), so this executor 
> count might be wrong.
>  
> I've created a test that "reproduces" this behavior; I'm not sure how good it is:
> {code:java}
> //MesosCoarseGrainedSchedulerBackendSuite
> test("max executors registered stops to accept offers when dynamic allocation 
> enabled") {
>   setBackend(Map(
> "spark.dynamicAllocation.maxExecutors" -> "1",
> "spark.dynamicAllocation.enabled" -> "true",
> "spark.dynamicAllocation.testing" -> "true"))
>   backend.doRequestTotalExecutors(1)
>   val (mem, cpu) = (backend.executorMemory(sc), 4)
>   val offer1 = createOffer("o1", "s1", mem, cpu)
>   backend.resourceOffers(driver, List(offer1).asJava)
>   verifyTaskLaunched(driver, "o1")
>   backend.doKillExecutors(List("0"))
>   verify(driver, times(1)).killTask(createTaskId("0"))
>   val offer2 = createOffer("o2", "s2", mem, cpu)
>   backend.resourceOffers(driver, List(offer2).asJava)
>   verify(driver, times(1)).declineOffer(offer2.getId)
> }{code}
>  
>  
> Workaround: don't set maxExecutors with dynamic allocation on.
>  
> Please advise.
> Igor
> Marking you, friends, since you were the last to touch this piece of code and can 
> probably advise something ([~vanzin], [~skonto], [~susanxhuynh])



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-23423) Application declines any offers when killed+active executors reach spark.dynamicAllocation.maxExecutors

2018-02-20 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370074#comment-16370074
 ] 

Stavros Kontopoulos commented on SPARK-23423:
-

[~igor.berman] Thanks for supplying the info. We are aware of SPARK-19755; we have 
actually wanted to fix this for quite some time. The hardcoded value is not the way 
to go, for sure. One note here, though: try to avoid the failure of the executors 
in the first place; even if you tolerate more failures, they shouldn't fail. AFAIK 
you can allocate different port ranges for your tasks, or by default ports are 
random from what I recall. I am still curious how they fail... maybe there is an 
issue there as well... 

> Application declines any offers when killed+active executors reach 
> spark.dynamicAllocation.maxExecutors
> --
>
> Key: SPARK-23423
> URL: https://issues.apache.org/jira/browse/SPARK-23423
> Project: Spark
>  Issue Type: Bug
>  Components: Mesos, Spark Core
>Affects Versions: 2.2.1
>Reporter: Igor Berman
>Priority: Major
>  Labels: Mesos, dynamic_allocation
>
> Hi,
> Mesos version: 1.1.0
> I've noticed rather strange behavior of MesosCoarseGrainedSchedulerBackend when 
> running on Mesos with dynamic allocation on and the maximum number of executors 
> limited by spark.dynamicAllocation.maxExecutors.
> Suppose we have a long-running driver with a cyclic pattern of resource 
> consumption (with some idle time in between); due to dynamic allocation it 
> receives offers and then releases them after the current chunk of work is 
> processed.
> At 
> [https://github.com/apache/spark/blob/master/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L573]
>  the backend compares numExecutors < executorLimit, where numExecutors is defined 
> as slaves.values.map(_.taskIDs.size).sum and slaves holds all slaves ever "met", 
> i.e. both active and killed (see the comment at 
> [https://github.com/apache/spark/blob/master/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L122]).
> The number of task IDs should be updated by statusUpdate, but suppose this update 
> is lost (in fact I don't see 'is now TASK_KILLED' in the logs), so this executor 
> count might be wrong.
>  
> I've created a test that "reproduces" this behavior; I'm not sure how good it is:
> {code:java}
> //MesosCoarseGrainedSchedulerBackendSuite
> test("max executors registered stops to accept offers when dynamic allocation 
> enabled") {
>   setBackend(Map(
> "spark.dynamicAllocation.maxExecutors" -> "1",
> "spark.dynamicAllocation.enabled" -> "true",
> "spark.dynamicAllocation.testing" -> "true"))
>   backend.doRequestTotalExecutors(1)
>   val (mem, cpu) = (backend.executorMemory(sc), 4)
>   val offer1 = createOffer("o1", "s1", mem, cpu)
>   backend.resourceOffers(driver, List(offer1).asJava)
>   verifyTaskLaunched(driver, "o1")
>   backend.doKillExecutors(List("0"))
>   verify(driver, times(1)).killTask(createTaskId("0"))
>   val offer2 = createOffer("o2", "s2", mem, cpu)
>   backend.resourceOffers(driver, List(offer2).asJava)
>   verify(driver, times(1)).declineOffer(offer2.getId)
> }{code}
>  
>  
> Workaround: don't set maxExecutors with dynamic allocation on.
>  
> Please advise.
> Igor
> Marking you, friends, since you were the last to touch this piece of code and can 
> probably advise something ([~vanzin], [~skonto], [~susanxhuynh])



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-23383) Make a distribution should exit with usage while detecting wrong options

2018-02-20 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen resolved SPARK-23383.
---
   Resolution: Fixed
Fix Version/s: 2.4.0

Issue resolved by pull request 20571
[https://github.com/apache/spark/pull/20571]

> Make a distribution should exit with usage while detecting wrong options
> 
>
> Key: SPARK-23383
> URL: https://issues.apache.org/jira/browse/SPARK-23383
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 2.2.1
>Reporter: Kent Yao
>Assignee: Kent Yao
>Priority: Minor
> Fix For: 2.4.0
>
>
> {code:java}
> ./dev/make-distribution.sh --name ne-1.0.0-SNAPSHOT xyz --tgz  -Phadoop-2.7
> +++ dirname ./dev/make-distribution.sh
> ++ cd ./dev/..
> ++ pwd
> + SPARK_HOME=/Users/Kent/Documents/spark
> + DISTDIR=/Users/Kent/Documents/spark/dist
> + MAKE_TGZ=false
> + MAKE_PIP=false
> + MAKE_R=false
> + NAME=none
> + MVN=/Users/Kent/Documents/spark/build/mvn
> + ((  5  ))
> + case $1 in
> + NAME=ne-1.0.0-SNAPSHOT
> + shift
> + shift
> + ((  3  ))
> + case $1 in
> + break
> + '[' -z /Users/Kent/.jenv/candidates/java/current ']'
> + '[' -z /Users/Kent/.jenv/candidates/java/current ']'
> ++ command -v git
> + '[' /usr/local/bin/git ']'
> ++ git rev-parse --short HEAD
> + GITREV=98ea6a7
> + '[' '!' -z 98ea6a7 ']'
> + GITREVSTRING=' (git revision 98ea6a7)'
> + unset GITREV
> ++ command -v /Users/Kent/Documents/spark/build/mvn
> + '[' '!' /Users/Kent/Documents/spark/build/mvn ']'
> ++ /Users/Kent/Documents/spark/build/mvn help:evaluate 
> -Dexpression=project.version xyz --tgz -Phadoop-2.7
> ++ grep -v INFO
> ++ tail -n 1
> + VERSION=' -X,--debug Produce execution debug 
> output'
> {code}
> It would be better to report the invalid options and exit with a usage message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23383) Make a distribution should exit with usage while detecting wrong options

2018-02-20 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen reassigned SPARK-23383:
-

Assignee: Kent Yao

> Make a distribution should exit with usage while detecting wrong options
> 
>
> Key: SPARK-23383
> URL: https://issues.apache.org/jira/browse/SPARK-23383
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 2.2.1
>Reporter: Kent Yao
>Assignee: Kent Yao
>Priority: Minor
> Fix For: 2.4.0
>
>
> {code:java}
> ./dev/make-distribution.sh --name ne-1.0.0-SNAPSHOT xyz --tgz  -Phadoop-2.7
> +++ dirname ./dev/make-distribution.sh
> ++ cd ./dev/..
> ++ pwd
> + SPARK_HOME=/Users/Kent/Documents/spark
> + DISTDIR=/Users/Kent/Documents/spark/dist
> + MAKE_TGZ=false
> + MAKE_PIP=false
> + MAKE_R=false
> + NAME=none
> + MVN=/Users/Kent/Documents/spark/build/mvn
> + ((  5  ))
> + case $1 in
> + NAME=ne-1.0.0-SNAPSHOT
> + shift
> + shift
> + ((  3  ))
> + case $1 in
> + break
> + '[' -z /Users/Kent/.jenv/candidates/java/current ']'
> + '[' -z /Users/Kent/.jenv/candidates/java/current ']'
> ++ command -v git
> + '[' /usr/local/bin/git ']'
> ++ git rev-parse --short HEAD
> + GITREV=98ea6a7
> + '[' '!' -z 98ea6a7 ']'
> + GITREVSTRING=' (git revision 98ea6a7)'
> + unset GITREV
> ++ command -v /Users/Kent/Documents/spark/build/mvn
> + '[' '!' /Users/Kent/Documents/spark/build/mvn ']'
> ++ /Users/Kent/Documents/spark/build/mvn help:evaluate 
> -Dexpression=project.version xyz --tgz -Phadoop-2.7
> ++ grep -v INFO
> ++ tail -n 1
> + VERSION=' -X,--debug Produce execution debug 
> output'
> {code}
> It would be better to report the invalid options and exit with a usage message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-20845) Support specification of column names in INSERT INTO

2018-02-20 Thread Mihaly Toth (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370056#comment-16370056
 ] 

Mihaly Toth commented on SPARK-20845:
-

As shown in the links, this duplicates SPARK-21548. However, the wording of 
this issue sounds more descriptive to me. On the other hand, the other one has a 
PR linked to it.

To reduce duplication I would propose closing SPARK-21548 and adding the prior pull 
request https://github.com/apache/spark/pull/18756 to the links section of this 
issue.

> Support specification of column names in INSERT INTO
> 
>
> Key: SPARK-20845
> URL: https://issues.apache.org/jira/browse/SPARK-20845
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.0.0
>Reporter: Josh Rosen
>Priority: Minor
>
> Some databases allow you to specify column names when specifying the target 
> of an INSERT INTO. For example, in SQLite:
> {code}
> sqlite> CREATE TABLE twocolumn (x INT, y INT); INSERT INTO twocolumn(x, y) 
> VALUES (44,51), (NULL,52), (42,53), (45,45)
>...> ;
> sqlite> select * from twocolumn;
> 44|51
> |52
> 42|53
> 45|45
> {code}
> I have a corpus of existing queries of this form which I would like to run on 
> Spark SQL, so I think we should extend our dialect to support this syntax.
> When implementing this, we should make sure to test the following behaviors 
> and corner-cases:
> - Number of columns specified is greater than or less than the number of 
> columns in the table.
> - Specification of repeated columns.
> - Specification of columns which do not exist in the target table.
> - Permute column order instead of using the default order in the table.
> For each of these, we should check how SQLite behaves and should also compare 
> against another database. It looks like T-SQL supports this; see 
> https://technet.microsoft.com/en-us/library/dd776381(v=sql.105).aspx under 
> the "Inserting data that is not in the same order as the table columns" 
> header.
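>  
> For reference, a sketch of what this would look like against Spark (hypothetical: 
> the explicit column-list form is exactly what this issue requests and does not 
> parse today; only the plain INSERT runs on current versions):
> {code:scala}
> import org.apache.spark.sql.SparkSession
> 
> object InsertColumnListSketch extends App {
>   val spark = SparkSession.builder().master("local[1]").appName("sketch").getOrCreate()
> 
>   spark.sql("CREATE TABLE twocolumn (x INT, y INT) USING parquet")
> 
>   // Supported today: values must be listed in the table's column order.
>   spark.sql("INSERT INTO twocolumn VALUES (44, 51)")
> 
>   // Requested by this issue (currently a parse error): explicit, possibly permuted
>   // column list, as in the SQLite and T-SQL examples above.
>   // spark.sql("INSERT INTO twocolumn (y, x) VALUES (51, 44)")
> 
>   spark.stop()
> }
> {code}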



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-23463) Filter operation fails to handle blank values and evicts rows that even satisfy the filtering condition

2018-02-20 Thread Marco Gaido (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370046#comment-16370046
 ] 

Marco Gaido commented on SPARK-23463:
-

Hi [~m.bakshi11]. The problem is simple: the column `val` in `df` is a 
string. When you filter it by comparing to an integer, it is cast to an integer 
for the comparison. The problem here is that you think you are 
dealing with doubles, while you actually have strings. Fix your code 
to parse the column `val` as a double and you won't have this issue 
anymore. Thanks.
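
A minimal illustration of that advice, written here against the Scala API (an 
assumed, self-contained example; the PySpark fix is analogous, i.e. cast the 
column before comparing):

{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object FilterCastSketch extends App {
  val spark = SparkSession.builder().master("local[1]").appName("sketch").getOrCreate()
  import spark.implicits._

  // "val" is a string column, as in the report above (one blank value included).
  val df = Seq("0.01", "0.02", "0.004", "", "2.5", "4.5", "45").toDF("val")

  // Comparing the string column with an integer literal relies on an implicit cast.
  df.filter(col("val") > 0).show()

  // Parse the column as a double explicitly before comparing, as suggested above.
  df.filter(col("val").cast("double") > 0).show()

  spark.stop()
}
{code}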

> Filter operation fails to handle blank values and evicts rows that even 
> satisfy the filtering condition
> ---
>
> Key: SPARK-23463
> URL: https://issues.apache.org/jira/browse/SPARK-23463
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 2.2.1
>Reporter: Manan Bakshi
>Priority: Critical
> Attachments: sample
>
>
> Filter operations were updated in Spark 2.2.0. The Cost Based Optimizer was 
> introduced to look at the table stats and decide filter selectivity. However, 
> since then, filter has started behaving unexpectedly for blank values. The 
> operation would not only drop rows with blank values but also filter out 
> rows that actually meet the filter criteria.
> Steps to repro
> Consider a simple dataframe with some blank values as below:
> ||dev||val||
> |ALL|0.01|
> |ALL|0.02|
> |ALL|0.004|
> |ALL| |
> |ALL|2.5|
> |ALL|4.5|
> |ALL|45|
> Running a simple filter operation over the val column in this dataframe yields 
> unexpected results. For example, the following query returned an empty dataframe:
> df.filter(df["val"] > 0)
> ||dev||val||
> However, the filter operation works as expected if the 0 in the filter condition is 
> replaced by the float 0.0:
> df.filter(df["val"] > 0.0)
> ||dev||val||
> |ALL|0.01|
> |ALL|0.02|
> |ALL|0.004|
> |ALL|2.5|
> |ALL|4.5|
> |ALL|45|
>  
> Note that this bug only exists in Spark 2.2.0 and later. The previous 
> versions filter as expected for both int (0) and float (0.0) values in the 
> filter condition.
> Also, if there are no blank values, the filter operation works as expected 
> for all versions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-23443) Spark with Glue as external catalog

2018-02-20 Thread Ameen Tayyebi (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370015#comment-16370015
 ] 

Ameen Tayyebi commented on SPARK-23443:
---

Hello,

I have this locally working (just limited read cases) and plan to submit my
first iteration this week.

It would be great to work on this together! Any help would be appreciated :)

Ameen




> Spark with Glue as external catalog
> ---
>
> Key: SPARK-23443
> URL: https://issues.apache.org/jira/browse/SPARK-23443
> Project: Spark
>  Issue Type: New Feature
>  Components: Spark Core
>Affects Versions: 2.4.0
>Reporter: Ameen Tayyebi
>Priority: Major
>
> AWS Glue Catalog is an external Hive metastore backed by a web service. It 
> allows permanent storage of catalog data for BigData use cases.
> To find out more information about AWS Glue, please consult:
>  * AWS Glue - [https://aws.amazon.com/glue/]
>  * Using Glue as a Metastore catalog for Spark - 
> [https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-glue.html]
> Today, the integration of Glue and Spark is through the Hive layer. Glue 
> implements the IMetaStore interface of Hive and for installations of Spark 
> that contain Hive, Glue can be used as the metastore.
> The feature set that Glue supports does not align 1-1 with the set of 
> features that the latest version of Spark supports. For example, the Glue 
> interface supports more advanced partition pruning than the latest version of 
> Hive embedded in Spark.
> To enable a more natural integration with Spark and to allow leveraging the 
> latest features of Glue without being coupled to Hive, a direct integration 
> through Spark's own Catalog API is proposed. This Jira tracks this work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-19755) Blacklist is always active for MesosCoarseGrainedSchedulerBackend. As result - scheduler cannot create an executor after some time.

2018-02-20 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370002#comment-16370002
 ] 

Apache Spark commented on SPARK-19755:
--

User 'IgorBerman' has created a pull request for this issue:
https://github.com/apache/spark/pull/20640

> Blacklist is always active for MesosCoarseGrainedSchedulerBackend. As result 
> - scheduler cannot create an executor after some time.
> ---
>
> Key: SPARK-19755
> URL: https://issues.apache.org/jira/browse/SPARK-19755
> Project: Spark
>  Issue Type: Bug
>  Components: Mesos, Scheduler
>Affects Versions: 2.1.0
> Environment: mesos, marathon, docker - driver and executors are 
> dockerized.
>Reporter: Timur Abakumov
>Priority: Major
>
> When a task fails for some reason, MesosCoarseGrainedSchedulerBackend 
> increases the failure counter for the slave where that task was running.
> When the counter is >= 2 (MAX_SLAVE_FAILURES), the Mesos slave is excluded.
> Over time the scheduler cannot create a new executor - every slave is in the 
> blacklist. A task failure is not necessarily related to host health, especially for 
> long-running streaming apps.
> If accepted as a bug: a possible solution is to use spark.blacklist.enabled to 
> make that functionality optional and, if it makes sense, to make MAX_SLAVE_FAILURES 
> configurable as well.
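>  
> A standalone sketch of the behaviour described (illustrative names, not Spark's 
> code): once a slave accumulates MAX_SLAVE_FAILURES failed tasks it is excluded 
> forever, so a long-running app can eventually exclude every slave.
> {code:scala}
> import scala.collection.mutable
> 
> object SlaveBlacklistSketch extends App {
>   val MaxSlaveFailures = 2
>   private val failuresBySlave = mutable.Map.empty[String, Int].withDefaultValue(0)
> 
>   def recordFailure(slave: String): Unit = failuresBySlave(slave) += 1
>   def isExcluded(slave: String): Boolean = failuresBySlave(slave) >= MaxSlaveFailures
> 
>   // Two transient task failures per slave - unrelated to host health - are enough.
>   Seq("s1", "s1", "s2", "s2").foreach(recordFailure)
> 
>   // Prints List((s1,true), (s2,true)): no slave is left to launch executors on.
>   println(Seq("s1", "s2").map(s => s -> isExcluded(s)))
> }
> {code}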



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-23240) PythonWorkerFactory issues unhelpful message when pyspark.daemon produces bogus stdout

2018-02-20 Thread Hyukjin Kwon (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-23240.
--
   Resolution: Fixed
Fix Version/s: 2.4.0

Issue resolved by pull request 20424
[https://github.com/apache/spark/pull/20424]

> PythonWorkerFactory issues unhelpful message when pyspark.daemon produces 
> bogus stdout
> --
>
> Key: SPARK-23240
> URL: https://issues.apache.org/jira/browse/SPARK-23240
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 2.2.1
>Reporter: Bruce Robbins
>Assignee: Bruce Robbins
>Priority: Minor
> Fix For: 2.4.0
>
>
> Environmental issues or site-local customizations (i.e., sitecustomize.py 
> present in the python install directory) can interfere with daemon.py’s 
> output to stdout. PythonWorkerFactory produces unhelpful messages when this 
> happens, causing some head scratching before the actual issue is determined.
> Case #1: Extraneous data in pyspark.daemon’s stdout. In this case, 
> PythonWorkerFactory uses the output as the daemon’s port number and ends up 
> throwing an exception when creating the socket:
> {noformat}
> java.lang.IllegalArgumentException: port out of range:1819239265
>   at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
>   at java.net.InetSocketAddress.(InetSocketAddress.java:188)
>   at java.net.Socket.(Socket.java:244)
>   at 
> org.apache.spark.api.python.PythonWorkerFactory.createSocket$1(PythonWorkerFactory.scala:78)
> {noformat}
> Case #2: No data in pyspark.daemon’s stdout. In this case, 
> PythonWorkerFactory throws an EOFException reading from the 
> Process input stream.
> The second case is somewhat less mysterious than the first, because 
> PythonWorkerFactory also displays the stderr from the python process.
> When there is unexpected or missing output in pyspark.daemon’s stdout, 
> PythonWorkerFactory should say so.
>  
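> A standalone sketch of what case #1 boils down to (not Spark's code; it only shows 
> the arithmetic): reading a few bytes of stray text as if they were the port number 
> yields exactly the kind of value seen in the exception above.
> {code:scala}
> import java.io.{ByteArrayInputStream, DataInputStream}
> 
> object BogusDaemonStdoutSketch extends App {
>   // Four bytes of chatter on stdout ("loca", e.g. the start of some stray message)
>   // instead of the 4-byte port written by pyspark.daemon.
>   val bogusStdout =
>     new DataInputStream(new ByteArrayInputStream("loca".getBytes("US-ASCII")))
> 
>   // Prints 1819239265 - the very "port" in the IllegalArgumentException above -
>   // which is far outside the valid 0-65535 range.
>   println(bogusStdout.readInt())
> }
> {code}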



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23240) PythonWorkerFactory issues unhelpful message when pyspark.daemon produces bogus stdout

2018-02-20 Thread Hyukjin Kwon (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon reassigned SPARK-23240:


Assignee: Bruce Robbins

> PythonWorkerFactory issues unhelpful message when pyspark.daemon produces 
> bogus stdout
> --
>
> Key: SPARK-23240
> URL: https://issues.apache.org/jira/browse/SPARK-23240
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 2.2.1
>Reporter: Bruce Robbins
>Assignee: Bruce Robbins
>Priority: Minor
> Fix For: 2.4.0
>
>
> Environmental issues or site-local customizations (i.e., sitecustomize.py 
> present in the python install directory) can interfere with daemon.py’s 
> output to stdout. PythonWorkerFactory produces unhelpful messages when this 
> happens, causing some head scratching before the actual issue is determined.
> Case #1: Extraneous data in pyspark.daemon’s stdout. In this case, 
> PythonWorkerFactory uses the output as the daemon’s port number and ends up 
> throwing an exception when creating the socket:
> {noformat}
> java.lang.IllegalArgumentException: port out of range:1819239265
>   at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
>   at java.net.InetSocketAddress.(InetSocketAddress.java:188)
>   at java.net.Socket.(Socket.java:244)
>   at 
> org.apache.spark.api.python.PythonWorkerFactory.createSocket$1(PythonWorkerFactory.scala:78)
> {noformat}
> Case #2: No data in pyspark.daemon’s stdout. In this case, 
> PythonWorkerFactory throws an EOFException reading from the 
> Process input stream.
> The second case is somewhat less mysterious than the first, because 
> PythonWorkerFactory also displays the stderr from the python process.
> When there is unexpected or missing output in pyspark.daemon’s stdout, 
> PythonWorkerFactory should say so.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-11182) HDFS Delegation Token will be expired when calling "UserGroupInformation.getCurrentUser.addCredentials" in HA mode

2018-02-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369914#comment-16369914
 ] 

Steve Loughran commented on SPARK-11182:


The bug is in HDFS; it has been fixed in 2.8.2+ with cherry-picks to both CDH and HDP 
(AFAIK).

> HDFS Delegation Token will be expired when calling 
> "UserGroupInformation.getCurrentUser.addCredentials" in HA mode
> --
>
> Key: SPARK-11182
> URL: https://issues.apache.org/jira/browse/SPARK-11182
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.5.1
>Reporter: Liangliang Gu
>Priority: Major
>
> In HA mode, DFSClient will automatically generate an HDFS Delegation Token for each 
> NameNode; these tokens will not be updated when Spark updates Credentials for 
> the current user.
> Spark should update these tokens in order to avoid a Token Expired error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23288) Incorrect number of written records in structured streaming

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23288:


Assignee: (was: Apache Spark)

> Incorrect number of written records in structured streaming
> ---
>
> Key: SPARK-23288
> URL: https://issues.apache.org/jira/browse/SPARK-23288
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Yuriy Bondaruk
>Priority: Major
>  Labels: Metrics, metrics
>
> I'm using SparkListener.onTaskEnd() to capture input and output metrics, but 
> it seems that the number of written records 
> ('taskEnd.taskMetrics().outputMetrics().recordsWritten()') is incorrect. Here 
> is my stream construction:
>  
> {code:java}
> StreamingQuery writeStream = session
> .readStream()
> .schema(RecordSchema.fromClass(TestRecord.class))
> .option(OPTION_KEY_DELIMITER, OPTION_VALUE_DELIMITER_TAB)
> .option(OPTION_KEY_QUOTE, OPTION_VALUE_QUOTATION_OFF)
> .csv(inputFolder.getRoot().toPath().toString())
> .as(Encoders.bean(TestRecord.class))
> .flatMap(
> ((FlatMapFunction) (u) -> {
> List resultIterable = new ArrayList<>();
> try {
> TestVendingRecord result = transformer.convert(u);
> resultIterable.add(result);
> } catch (Throwable t) {
> System.err.println("Ooops");
> t.printStackTrace();
> }
> return resultIterable.iterator();
> }),
> Encoders.bean(TestVendingRecord.class))
> .writeStream()
> .outputMode(OutputMode.Append())
> .format("parquet")
> .option("path", outputFolder.getRoot().toPath().toString())
> .option("checkpointLocation", 
> checkpointFolder.getRoot().toPath().toString())
> .start();
> writeStream.processAllAvailable();
> writeStream.stop();
> {code}
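> For context, the log lines below come from a SparkListener along these lines (a 
> sketch of an assumed setup only; the actual TestMain class may differ):
> {code:scala}
> import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}
> import org.apache.spark.sql.SparkSession
> 
> object RecordsWrittenListenerSketch {
>   def register(spark: SparkSession): Unit = {
>     spark.sparkContext.addSparkListener(new SparkListener {
>       override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
>         val metrics = taskEnd.taskMetrics
>         if (metrics != null) {
>           // For the parquet file sink above, recordsWritten is reported as 0 even
>           // though records were read and output files were produced.
>           println(s"---status--> ${taskEnd.reason}")
>           println(s"---recordsWritten--> ${metrics.outputMetrics.recordsWritten}")
>           println(s"---recordsRead-> ${metrics.inputMetrics.recordsRead}")
>         }
>       }
>     })
>   }
> }
> {code}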
> I tested it with one good and one bad (throwing an exception in 
> transformer.convert(u)) input record, and it produces the following metrics:
>  
> {code:java}
> (TestMain.java:onTaskEnd(73)) - ---status--> SUCCESS
> (TestMain.java:onTaskEnd(75)) - ---recordsWritten--> 0
> (TestMain.java:onTaskEnd(76)) - ---recordsRead-> 2
> (TestMain.java:onTaskEnd(83)) - taskEnd.taskInfo().accumulables():
> (TestMain.java:onTaskEnd(84)) - name = duration total (min, med, max)
> (TestMain.java:onTaskEnd(85)) - value =  323
> (TestMain.java:onTaskEnd(84)) - name = number of output rows
> (TestMain.java:onTaskEnd(85)) - value =  2
> (TestMain.java:onTaskEnd(84)) - name = duration total (min, med, max)
> (TestMain.java:onTaskEnd(85)) - value =  364
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.input.recordsRead
> (TestMain.java:onTaskEnd(85)) - value =  2
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.input.bytesRead
> (TestMain.java:onTaskEnd(85)) - value =  157
> (TestMain.java:onTaskEnd(84)) - name = 
> internal.metrics.resultSerializationTime
> (TestMain.java:onTaskEnd(85)) - value =  3
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.resultSize
> (TestMain.java:onTaskEnd(85)) - value =  2396
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.executorCpuTime
> (TestMain.java:onTaskEnd(85)) - value =  633807000
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.executorRunTime
> (TestMain.java:onTaskEnd(85)) - value =  683
> (TestMain.java:onTaskEnd(84)) - name = 
> internal.metrics.executorDeserializeCpuTime
> (TestMain.java:onTaskEnd(85)) - value =  55662000
> (TestMain.java:onTaskEnd(84)) - name = 
> internal.metrics.executorDeserializeTime
> (TestMain.java:onTaskEnd(85)) - value =  58
> (TestMain.java:onTaskEnd(89)) - input records 2
> Streaming query made progress: {
>   "id" : "1231f9cb-b2e8-4d10-804d-73d7826c1cb5",
>   "runId" : "bd23b60c-93f9-4e17-b3bc-55403edce4e7",
>   "name" : null,
>   "timestamp" : "2018-01-26T14:44:05.362Z",
>   "numInputRows" : 2,
>   "processedRowsPerSecond" : 0.8163265306122448,
>   "durationMs" : {
> "addBatch" : 1994,
> "getBatch" : 126,
> "getOffset" : 52,
> "queryPlanning" : 220,
> "triggerExecution" : 2450,
> "walCommit" : 41
>   },
>   "stateOperators" : [ ],
>   "sources" : [ {
> "description" : 
> "FileStreamSource[file:/var/folders/4w/zks_kfls2s3glmrj3f725p7hllyb5_/T/junit3661035412295337071]",
> "startOffset" : null,
> "endOffset" : {
>   "logOffset" : 0
> },
> "numInputRows" : 2,
> "processedRowsPerSecond" : 0.8163265306122448
>   } ],
>   "sink" : {
> "description" : 
> "FileSink[/var/folders/4w/zks_kfls2s3glmrj3f725p7hllyb5_/T/junit3785605384928624065]"
>   }
> }
>

[jira] [Commented] (SPARK-23288) Incorrect number of written records in structured streaming

2018-02-20 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369898#comment-16369898
 ] 

Apache Spark commented on SPARK-23288:
--

User 'gaborgsomogyi' has created a pull request for this issue:
https://github.com/apache/spark/pull/20639

> Incorrect number of written records in structured streaming
> ---
>
> Key: SPARK-23288
> URL: https://issues.apache.org/jira/browse/SPARK-23288
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Yuriy Bondaruk
>Priority: Major
>  Labels: Metrics, metrics
>
> I'm using SparkListener.onTaskEnd() to capture input and output metrics, but 
> it seems that the number of written records 
> ('taskEnd.taskMetrics().outputMetrics().recordsWritten()') is incorrect. Here 
> is my stream construction:
>  
> {code:java}
> StreamingQuery writeStream = session
> .readStream()
> .schema(RecordSchema.fromClass(TestRecord.class))
> .option(OPTION_KEY_DELIMITER, OPTION_VALUE_DELIMITER_TAB)
> .option(OPTION_KEY_QUOTE, OPTION_VALUE_QUOTATION_OFF)
> .csv(inputFolder.getRoot().toPath().toString())
> .as(Encoders.bean(TestRecord.class))
> .flatMap(
> ((FlatMapFunction) (u) -> {
> List resultIterable = new ArrayList<>();
> try {
> TestVendingRecord result = transformer.convert(u);
> resultIterable.add(result);
> } catch (Throwable t) {
> System.err.println("Ooops");
> t.printStackTrace();
> }
> return resultIterable.iterator();
> }),
> Encoders.bean(TestVendingRecord.class))
> .writeStream()
> .outputMode(OutputMode.Append())
> .format("parquet")
> .option("path", outputFolder.getRoot().toPath().toString())
> .option("checkpointLocation", 
> checkpointFolder.getRoot().toPath().toString())
> .start();
> writeStream.processAllAvailable();
> writeStream.stop();
> {code}
> I tested it with one good and one bad (throwing an exception in 
> transformer.convert(u)) input record, and it produces the following metrics:
>  
> {code:java}
> (TestMain.java:onTaskEnd(73)) - ---status--> SUCCESS
> (TestMain.java:onTaskEnd(75)) - ---recordsWritten--> 0
> (TestMain.java:onTaskEnd(76)) - ---recordsRead-> 2
> (TestMain.java:onTaskEnd(83)) - taskEnd.taskInfo().accumulables():
> (TestMain.java:onTaskEnd(84)) - name = duration total (min, med, max)
> (TestMain.java:onTaskEnd(85)) - value =  323
> (TestMain.java:onTaskEnd(84)) - name = number of output rows
> (TestMain.java:onTaskEnd(85)) - value =  2
> (TestMain.java:onTaskEnd(84)) - name = duration total (min, med, max)
> (TestMain.java:onTaskEnd(85)) - value =  364
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.input.recordsRead
> (TestMain.java:onTaskEnd(85)) - value =  2
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.input.bytesRead
> (TestMain.java:onTaskEnd(85)) - value =  157
> (TestMain.java:onTaskEnd(84)) - name = 
> internal.metrics.resultSerializationTime
> (TestMain.java:onTaskEnd(85)) - value =  3
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.resultSize
> (TestMain.java:onTaskEnd(85)) - value =  2396
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.executorCpuTime
> (TestMain.java:onTaskEnd(85)) - value =  633807000
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.executorRunTime
> (TestMain.java:onTaskEnd(85)) - value =  683
> (TestMain.java:onTaskEnd(84)) - name = 
> internal.metrics.executorDeserializeCpuTime
> (TestMain.java:onTaskEnd(85)) - value =  55662000
> (TestMain.java:onTaskEnd(84)) - name = 
> internal.metrics.executorDeserializeTime
> (TestMain.java:onTaskEnd(85)) - value =  58
> (TestMain.java:onTaskEnd(89)) - input records 2
> Streaming query made progress: {
>   "id" : "1231f9cb-b2e8-4d10-804d-73d7826c1cb5",
>   "runId" : "bd23b60c-93f9-4e17-b3bc-55403edce4e7",
>   "name" : null,
>   "timestamp" : "2018-01-26T14:44:05.362Z",
>   "numInputRows" : 2,
>   "processedRowsPerSecond" : 0.8163265306122448,
>   "durationMs" : {
> "addBatch" : 1994,
> "getBatch" : 126,
> "getOffset" : 52,
> "queryPlanning" : 220,
> "triggerExecution" : 2450,
> "walCommit" : 41
>   },
>   "stateOperators" : [ ],
>   "sources" : [ {
> "description" : 
> "FileStreamSource[file:/var/folders/4w/zks_kfls2s3glmrj3f725p7hllyb5_/T/junit3661035412295337071]",
> "startOffset" : null,
> "endOffset" : {
>   "logOffset" : 0
> },
> "numInputRows" : 2,
> "processedRowsPerSecond" : 0.8163265306122448
>   } ],
>   "sink" : {
> 

[jira] [Assigned] (SPARK-23288) Incorrect number of written records in structured streaming

2018-02-20 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-23288:


Assignee: Apache Spark

> Incorrect number of written records in structured streaming
> ---
>
> Key: SPARK-23288
> URL: https://issues.apache.org/jira/browse/SPARK-23288
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Structured Streaming
>Affects Versions: 2.2.0
>Reporter: Yuriy Bondaruk
>Assignee: Apache Spark
>Priority: Major
>  Labels: Metrics, metrics
>
> I'm using SparkListener.onTaskEnd() to capture input and output metrics, but 
> it seems that the number of written records 
> ('taskEnd.taskMetrics().outputMetrics().recordsWritten()') is incorrect. Here 
> is my stream construction:
>  
> {code:java}
> StreamingQuery writeStream = session
> .readStream()
> .schema(RecordSchema.fromClass(TestRecord.class))
> .option(OPTION_KEY_DELIMITER, OPTION_VALUE_DELIMITER_TAB)
> .option(OPTION_KEY_QUOTE, OPTION_VALUE_QUOTATION_OFF)
> .csv(inputFolder.getRoot().toPath().toString())
> .as(Encoders.bean(TestRecord.class))
> .flatMap(
> ((FlatMapFunction) (u) -> {
> List resultIterable = new ArrayList<>();
> try {
> TestVendingRecord result = transformer.convert(u);
> resultIterable.add(result);
> } catch (Throwable t) {
> System.err.println("Ooops");
> t.printStackTrace();
> }
> return resultIterable.iterator();
> }),
> Encoders.bean(TestVendingRecord.class))
> .writeStream()
> .outputMode(OutputMode.Append())
> .format("parquet")
> .option("path", outputFolder.getRoot().toPath().toString())
> .option("checkpointLocation", 
> checkpointFolder.getRoot().toPath().toString())
> .start();
> writeStream.processAllAvailable();
> writeStream.stop();
> {code}
> I tested it with one good and one bad (throwing an exception in 
> transformer.convert(u)) input record, and it produces the following metrics:
>  
> {code:java}
> (TestMain.java:onTaskEnd(73)) - ---status--> SUCCESS
> (TestMain.java:onTaskEnd(75)) - ---recordsWritten--> 0
> (TestMain.java:onTaskEnd(76)) - ---recordsRead-> 2
> (TestMain.java:onTaskEnd(83)) - taskEnd.taskInfo().accumulables():
> (TestMain.java:onTaskEnd(84)) - name = duration total (min, med, max)
> (TestMain.java:onTaskEnd(85)) - value =  323
> (TestMain.java:onTaskEnd(84)) - name = number of output rows
> (TestMain.java:onTaskEnd(85)) - value =  2
> (TestMain.java:onTaskEnd(84)) - name = duration total (min, med, max)
> (TestMain.java:onTaskEnd(85)) - value =  364
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.input.recordsRead
> (TestMain.java:onTaskEnd(85)) - value =  2
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.input.bytesRead
> (TestMain.java:onTaskEnd(85)) - value =  157
> (TestMain.java:onTaskEnd(84)) - name = 
> internal.metrics.resultSerializationTime
> (TestMain.java:onTaskEnd(85)) - value =  3
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.resultSize
> (TestMain.java:onTaskEnd(85)) - value =  2396
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.executorCpuTime
> (TestMain.java:onTaskEnd(85)) - value =  633807000
> (TestMain.java:onTaskEnd(84)) - name = internal.metrics.executorRunTime
> (TestMain.java:onTaskEnd(85)) - value =  683
> (TestMain.java:onTaskEnd(84)) - name = 
> internal.metrics.executorDeserializeCpuTime
> (TestMain.java:onTaskEnd(85)) - value =  55662000
> (TestMain.java:onTaskEnd(84)) - name = 
> internal.metrics.executorDeserializeTime
> (TestMain.java:onTaskEnd(85)) - value =  58
> (TestMain.java:onTaskEnd(89)) - input records 2
> Streaming query made progress: {
>   "id" : "1231f9cb-b2e8-4d10-804d-73d7826c1cb5",
>   "runId" : "bd23b60c-93f9-4e17-b3bc-55403edce4e7",
>   "name" : null,
>   "timestamp" : "2018-01-26T14:44:05.362Z",
>   "numInputRows" : 2,
>   "processedRowsPerSecond" : 0.8163265306122448,
>   "durationMs" : {
> "addBatch" : 1994,
> "getBatch" : 126,
> "getOffset" : 52,
> "queryPlanning" : 220,
> "triggerExecution" : 2450,
> "walCommit" : 41
>   },
>   "stateOperators" : [ ],
>   "sources" : [ {
> "description" : 
> "FileStreamSource[file:/var/folders/4w/zks_kfls2s3glmrj3f725p7hllyb5_/T/junit3661035412295337071]",
> "startOffset" : null,
> "endOffset" : {
>   "logOffset" : 0
> },
> "numInputRows" : 2,
> "processedRowsPerSecond" : 0.8163265306122448
>   } ],
>   "sink" : {
> "description" : 
> "FileSink[/var/folders/4w/zks_kfls2s3glmrj3f725p7hllyb5_/T/junit37856053

[jira] [Commented] (SPARK-11182) HDFS Delegation Token will be expired when calling "UserGroupInformation.getCurrentUser.addCredentials" in HA mode

2018-02-20 Thread Sumit Nigam (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369897#comment-16369897
 ] 

Sumit Nigam commented on SPARK-11182:
-

Also, is adding --conf spark.hadoop.fs.hdfs.impl.disable.cache=true  a 
workaround which can be used in the interim?
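
A sketch of where that setting would go if someone wants to try it (the property name 
is taken from the comment above; whether it actually helps here is exactly the open 
question):

{code:scala}
import org.apache.spark.sql.SparkSession

object DisableHdfsCacheSketch extends App {
  // Equivalent to passing --conf spark.hadoop.fs.hdfs.impl.disable.cache=true to
  // spark-submit: the "spark.hadoop." prefix forwards the rest to the Hadoop conf.
  val spark = SparkSession.builder()
    .appName("disable-hdfs-cache-sketch")
    .config("spark.hadoop.fs.hdfs.impl.disable.cache", "true")
    .getOrCreate()

  // ... job code ...
  spark.stop()
}
{code}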

> HDFS Delegation Token will be expired when calling 
> "UserGroupInformation.getCurrentUser.addCredentials" in HA mode
> --
>
> Key: SPARK-11182
> URL: https://issues.apache.org/jira/browse/SPARK-11182
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.5.1
>Reporter: Liangliang Gu
>Priority: Major
>
> In HA mode, DFSClient will automatically generate an HDFS Delegation Token for each 
> NameNode; these tokens will not be updated when Spark updates Credentials for 
> the current user.
> Spark should update these tokens in order to avoid a Token Expired error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-11182) HDFS Delegation Token will be expired when calling "UserGroupInformation.getCurrentUser.addCredentials" in HA mode

2018-02-20 Thread Sumit Nigam (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369896#comment-16369896
 ] 

Sumit Nigam commented on SPARK-11182:
-

Is this issue fixed in later versions of spark?

> HDFS Delegation Token will be expired when calling 
> "UserGroupInformation.getCurrentUser.addCredentials" in HA mode
> --
>
> Key: SPARK-11182
> URL: https://issues.apache.org/jira/browse/SPARK-11182
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.5.1
>Reporter: Liangliang Gu
>Priority: Major
>
> In HA mode, DFSClient will automatically generate an HDFS Delegation Token for each 
> NameNode; these tokens will not be updated when Spark updates Credentials for 
> the current user.
> Spark should update these tokens in order to avoid a Token Expired error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-23203) DataSourceV2 should use immutable trees.

2018-02-20 Thread Wenchen Fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenchen Fan resolved SPARK-23203.
-
   Resolution: Fixed
Fix Version/s: 2.4.0

Issue resolved by pull request 20387
[https://github.com/apache/spark/pull/20387]

> DataSourceV2 should use immutable trees.
> 
>
> Key: SPARK-23203
> URL: https://issues.apache.org/jira/browse/SPARK-23203
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Ryan Blue
>Assignee: Ryan Blue
>Priority: Blocker
> Fix For: 2.4.0
>
>
> The DataSourceV2 integration doesn't use [immutable 
> trees|https://databricks.com/blog/2015/04/13/deep-dive-into-spark-sqls-catalyst-optimizer.html],
>  which is a basic requirement of Catalyst. The v2 relation should not wrap a 
> mutable reader and change the logical plan by pushing projections and filters.
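>  
> A conceptual sketch of the difference (toy types, not Catalyst's classes): pushdown 
> on an immutable tree returns a new node instead of mutating state held inside an 
> existing one.
> {code:scala}
> sealed trait Plan
> case class Scan(columns: Seq[String], pushedFilters: Seq[String]) extends Plan
> case class Filter(condition: String, child: Plan) extends Plan
> 
> object ImmutablePushdownSketch extends App {
>   // Returns a *new* Scan with the filter folded in; the input tree is untouched.
>   def pushFilter(plan: Plan): Plan = plan match {
>     case Filter(cond, Scan(cols, pushed)) => Scan(cols, pushed :+ cond)
>     case other => other
>   }
> 
>   val before = Filter("x > 0", Scan(Seq("x", "y"), Nil))
>   val after = pushFilter(before)
> 
>   println(before) // Filter(x > 0,Scan(List(x, y),List())) - unchanged
>   println(after)  // Scan(List(x, y),List(x > 0))
> }
> {code}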



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23203) DataSourceV2 should use immutable trees.

2018-02-20 Thread Wenchen Fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenchen Fan reassigned SPARK-23203:
---

Assignee: Ryan Blue

> DataSourceV2 should use immutable trees.
> 
>
> Key: SPARK-23203
> URL: https://issues.apache.org/jira/browse/SPARK-23203
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Ryan Blue
>Assignee: Ryan Blue
>Priority: Blocker
> Fix For: 2.4.0
>
>
> The DataSourceV2 integration doesn't use [immutable 
> trees|https://databricks.com/blog/2015/04/13/deep-dive-into-spark-sqls-catalyst-optimizer.html],
>  which is a basic requirement of Catalyst. The v2 relation should not wrap a 
> mutable reader and change the logical plan by pushing projections and filters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org