[jira] [Updated] (SPARK-15691) Refactor and improve Hive support

2016-11-01 Thread Reynold Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reynold Xin updated SPARK-15691:

Target Version/s: 2.2.0  (was: 2.1.0)

> Refactor and improve Hive support
> -
>
> Key: SPARK-15691
> URL: https://issues.apache.org/jira/browse/SPARK-15691
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Reporter: Reynold Xin
>
> Hive support is important to Spark SQL, as many Spark users use it to read 
> from Hive. The current architecture is very difficult to maintain, and this 
> ticket tracks progress towards getting us to a sane state.
> A number of things we want to accomplish are:
> - Move the Hive-specific catalog logic into HiveExternalCatalog.
>   -- Remove HiveSessionCatalog. All Hive-related stuff should go into 
> HiveExternalCatalog. This would require moving caching either into 
> HiveExternalCatalog, or just into SessionCatalog.
>   -- Move the use of table properties to store data source options into 
> HiveExternalCatalog (so a CatalogTable returned by HiveExternalCatalog does 
> not need to distinguish between tables stored in Hive formats and data 
> source tables); a sketch of this idea follows this description.
>   -- Potentially more.
> - Remove the Hive-specific ScriptTransform implementation and make it more 
> general so we can put it in sql/core.
> - Implement HiveTableScan (and write path) as a data source, so we don't need 
> a special planner rule for HiveTableScan.
> - Remove HiveSharedState and HiveSessionState.
> One thing that is still unclear to me is how to handle Hive UDF support. 
> We might still need a special planner rule there.
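As a concrete illustration of the table-properties idea above: the sketch below is hypothetical (the property-key prefix is made up, not Spark's actual key layout). It flattens data source options into CatalogTable.properties on write and recovers them on read, so callers of HiveExternalCatalog can treat Hive-format and data source tables uniformly.

{code:scala}
import org.apache.spark.sql.catalyst.catalog.CatalogTable

object DataSourceOptionsAsProperties {
  // Illustrative prefix only; the real property keys used by Spark may differ.
  private val OptionPrefix = "spark.sql.sources.option."

  // Fold data source options into plain table properties before persisting
  // the table through the external catalog.
  def embedOptions(table: CatalogTable, options: Map[String, String]): CatalogTable =
    table.copy(properties = table.properties ++ options.map {
      case (key, value) => (OptionPrefix + key) -> value
    })

  // Recover the options from any CatalogTable returned by the catalog,
  // without knowing whether it is a Hive-format or a data source table.
  def extractOptions(table: CatalogTable): Map[String, String] =
    table.properties.collect {
      case (key, value) if key.startsWith(OptionPrefix) =>
        key.stripPrefix(OptionPrefix) -> value
    }
}
{code}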






[jira] [Updated] (SPARK-15691) Refactor and improve Hive support

2017-06-01 Thread Michael Armbrust (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Armbrust updated SPARK-15691:
-
Target Version/s: 2.3.0  (was: 2.2.0)

> Refactor and improve Hive support
> -
>
> Key: SPARK-15691
> URL: https://issues.apache.org/jira/browse/SPARK-15691
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Reporter: Reynold Xin
>
> Hive support is important to Spark SQL, as many Spark users use it to read 
> from Hive. The current architecture is very difficult to maintain, and this 
> ticket tracks progress towards getting us to a sane state.
> A number of things we want to accomplish are:
> - Move the Hive-specific catalog logic into HiveExternalCatalog.
>   -- Remove HiveSessionCatalog. All Hive-related stuff should go into 
> HiveExternalCatalog. This would require moving caching either into 
> HiveExternalCatalog, or just into SessionCatalog.
>   -- Move the use of table properties to store data source options into 
> HiveExternalCatalog (so a CatalogTable returned by HiveExternalCatalog does 
> not need to distinguish between tables stored in Hive formats and data 
> source tables).
>   -- Potentially more.
> - Remove the Hive-specific ScriptTransform implementation and make it more 
> general so we can put it in sql/core.
> - Implement HiveTableScan (and write path) as a data source, so we don't need 
> a special planner rule for HiveTableScan.
> - Remove HiveSharedState and HiveSessionState.
> One thing that is still unclear to me is how to handle Hive UDF support. 
> We might still need a special planner rule there; a usage sketch follows this 
> description.
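For readers unfamiliar with the Hive UDF point: the sketch below shows, under assumed names (the function class, JAR path, and query are hypothetical), how a Hive UDF is registered and invoked through Spark SQL. Resolving such functions is the part that may still need special analyzer/planner handling.

{code:scala}
import org.apache.spark.sql.SparkSession

object HiveUdfUsageSketch {
  def main(args: Array[String]): Unit = {
    // Hive UDF registration requires a session with Hive support enabled.
    val spark = SparkSession.builder()
      .appName("hive-udf-usage-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Register a function implemented against Hive's UDF API
    // (class name and jar location are hypothetical).
    spark.sql(
      "CREATE TEMPORARY FUNCTION my_upper AS 'com.example.hive.MyUpperUDF' " +
        "USING JAR '/tmp/my-hive-udfs.jar'")

    // The function is then callable like any built-in function.
    spark.sql("SELECT my_upper('hello')").show()

    spark.stop()
  }
}
{code}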






[jira] [Updated] (SPARK-15691) Refactor and improve Hive support

2016-05-31 Thread Reynold Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reynold Xin updated SPARK-15691:

Summary: Refactor and improve Hive support  (was: Refactor Hive support)

> Refactor and improve Hive support
> -
>
> Key: SPARK-15691
> URL: https://issues.apache.org/jira/browse/SPARK-15691
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Reporter: Reynold Xin
>
> Hive support is important to Spark SQL, as many Spark users use it to read 
> from Hive. The current architecture is very difficult to maintain, and this 
> ticket tracks progress towards getting us to a sane state.
> A number of things we want to 






[jira] [Updated] (SPARK-15691) Refactor and improve Hive support

2016-05-31 Thread Reynold Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reynold Xin updated SPARK-15691:

Description: 
Hive support is important to Spark SQL, as many Spark users use it to read from 
Hive. The current architecture is very difficult to maintain, and this ticket 
tracks progress towards getting us to a sane state.

A number of things we want to accomplish are:

- Remove HiveSessionCatalog. All Hive-related stuff should go into 
HiveExternalCatalog. This would require moving caching either into 
HiveExternalCatalog, or just into SessionCatalog.
- Move the Hive-specific catalog logic (e.g., using properties to store data 
source options) into HiveExternalCatalog.
- Remove the Hive-specific ScriptTransform implementation and make it more 
general so we can put it in sql/core.
- Implement HiveTableScan (and write path) as a data source, so we don't need a 
special planner rule for HiveTableScan.
- Remove HiveSharedState and HiveSessionState.



  was:
Hive support is important to Spark SQL, as many Spark users use it to read from 
Hive. The current architecture is very difficult to maintain, and this ticket 
tracks progress towards getting us to a sane state.

A number of things we want to 




> Refactor and improve Hive support
> -
>
> Key: SPARK-15691
> URL: https://issues.apache.org/jira/browse/SPARK-15691
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Reporter: Reynold Xin
>
> Hive support is important to Spark SQL, as many Spark users use it to read 
> from Hive. The current architecture is very difficult to maintain, and this 
> ticket tracks progress towards getting us to a sane state.
> A number of things we want to accomplish are:
> - Remove HiveSessionCatalog. All Hive-related stuff should go into 
> HiveExternalCatalog. This would require moving caching either into 
> HiveExternalCatalog, or just into SessionCatalog.
> - Move the Hive-specific catalog logic (e.g., using properties to store data 
> source options) into HiveExternalCatalog.
> - Remove the Hive-specific ScriptTransform implementation and make it more 
> general so we can put it in sql/core.
> - Implement HiveTableScan (and write path) as a data source, so we don't need 
> a special planner rule for HiveTableScan.
> - Remove HiveSharedState and HiveSessionState.






[jira] [Updated] (SPARK-15691) Refactor and improve Hive support

2016-05-31 Thread Reynold Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reynold Xin updated SPARK-15691:

Description: 
Hive support is important to Spark SQL, as many Spark users use it to read from 
Hive. The current architecture is very difficult to maintain, and this ticket 
tracks progress towards getting us to a sane state.

A number of things we want to accomplish are:

- Remove HiveSessionCatalog. All Hive-related stuff should go into 
HiveExternalCatalog. This would require moving caching either into 
HiveExternalCatalog, or just into SessionCatalog.
- Move the Hive-specific catalog logic (e.g., using properties to store data 
source options) into HiveExternalCatalog.
- Remove the Hive-specific ScriptTransform implementation and make it more 
general so we can put it in sql/core.
- Implement HiveTableScan (and write path) as a data source, so we don't need a 
special planner rule for HiveTableScan.
- Remove HiveSharedState and HiveSessionState.

One thing that is still unclear to me is how to handle Hive UDF support. We 
might still need a special planner rule there.


  was:
Hive support is important to Spark SQL, as many Spark users use it to read from 
Hive. The current architecture is very difficult to maintain, and this ticket 
tracks progress towards getting us to a sane state.

A number of things we want to accomplish are:

- Remove HiveSessionCatalog. All Hive-related stuff should go into 
HiveExternalCatalog. This would require moving caching either into 
HiveExternalCatalog, or just into SessionCatalog.
- Move the Hive-specific catalog logic (e.g., using properties to store data 
source options) into HiveExternalCatalog.
- Remove the Hive-specific ScriptTransform implementation and make it more 
general so we can put it in sql/core.
- Implement HiveTableScan (and write path) as a data source, so we don't need a 
special planner rule for HiveTableScan.
- Remove HiveSharedState and HiveSessionState.




> Refactor and improve Hive support
> -
>
> Key: SPARK-15691
> URL: https://issues.apache.org/jira/browse/SPARK-15691
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Reporter: Reynold Xin
>
> Hive support is important to Spark SQL, as many Spark users use it to read 
> from Hive. The current architecture is very difficult to maintain, and this 
> ticket tracks progress towards getting us to a sane state.
> A number of things we want to accomplish are:
> - Remove HiveSessionCatalog. All Hive-related stuff should go into 
> HiveExternalCatalog. This would require moving caching either into 
> HiveExternalCatalog, or just into SessionCatalog.
> - Move the Hive-specific catalog logic (e.g., using properties to store data 
> source options) into HiveExternalCatalog.
> - Remove the Hive-specific ScriptTransform implementation and make it more 
> general so we can put it in sql/core.
> - Implement HiveTableScan (and write path) as a data source, so we don't need 
> a special planner rule for HiveTableScan.
> - Remove HiveSharedState and HiveSessionState.
> One thing that is still unclear to me is how to handle Hive UDF support. 
> We might still need a special planner rule there.






[jira] [Updated] (SPARK-15691) Refactor and improve Hive support

2016-06-01 Thread Reynold Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reynold Xin updated SPARK-15691:

Description: 
Hive support is important to Spark SQL, as many Spark users use it to read from 
Hive. The current architecture is very difficult to maintain, and this ticket 
tracks progress towards getting us to a sane state.

A number of things we want to accomplish are:

- Move the Hive-specific catalog logic into HiveExternalCatalog.
  - Remove HiveSessionCatalog. All Hive-related stuff should go into 
HiveExternalCatalog. This would require moving caching either into 
HiveExternalCatalog, or just into SessionCatalog.
  - Move the use of table properties to store data source options into 
HiveExternalCatalog.
  - Potentially more.
- Remove the Hive-specific ScriptTransform implementation and make it more 
general so we can put it in sql/core.
- Implement HiveTableScan (and write path) as a data source, so we don't need a 
special planner rule for HiveTableScan.
- Remove HiveSharedState and HiveSessionState.

One thing that is still unclear to me is how to handle Hive UDF support. We 
might still need a special planner rule there.


  was:
Hive support is important to Spark SQL, as many Spark users use it to read from 
Hive. The current architecture is very difficult to maintain, and this ticket 
tracks progress towards getting us to a sane state.

A number of things we want to accomplish are:

- Remove HiveSessionCatalog. All Hive-related stuff should go into 
HiveExternalCatalog. This would require moving caching either into 
HiveExternalCatalog, or just into SessionCatalog.
- Move the Hive-specific catalog logic (e.g., using properties to store data 
source options) into HiveExternalCatalog.
- Remove the Hive-specific ScriptTransform implementation and make it more 
general so we can put it in sql/core.
- Implement HiveTableScan (and write path) as a data source, so we don't need a 
special planner rule for HiveTableScan.
- Remove HiveSharedState and HiveSessionState.

One thing that is still unclear to me is how to handle Hive UDF support. We 
might still need a special planner rule there.



> Refactor and improve Hive support
> -
>
> Key: SPARK-15691
> URL: https://issues.apache.org/jira/browse/SPARK-15691
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Reporter: Reynold Xin
>
> Hive support is important to Spark SQL, as many Spark users use it to read 
> from Hive. The current architecture is very difficult to maintain, and this 
> ticket tracks progress towards getting us to a sane state.
> A number of things we want to accomplish are:
> - Move the Hive-specific catalog logic into HiveExternalCatalog.
>   - Remove HiveSessionCatalog. All Hive-related stuff should go into 
> HiveExternalCatalog. This would require moving caching either into 
> HiveExternalCatalog, or just into SessionCatalog.
>   - Move the use of table properties to store data source options into 
> HiveExternalCatalog.
>   - Potentially more.
> - Remove the Hive-specific ScriptTransform implementation and make it more 
> general so we can put it in sql/core.
> - Implement HiveTableScan (and write path) as a data source, so we don't need 
> a special planner rule for HiveTableScan.
> - Remove HiveSharedState and HiveSessionState.
> One thing that is still unclear to me is how to handle Hive UDF support. 
> We might still need a special planner rule there.






[jira] [Updated] (SPARK-15691) Refactor and improve Hive support

2016-06-01 Thread Reynold Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reynold Xin updated SPARK-15691:

Description: 
Hive support is important to Spark SQL, as many Spark users use it to read from 
Hive. The current architecture is very difficult to maintain, and this ticket 
tracks progress towards getting us to a sane state.

A number of things we want to accomplish are:

- Move the Hive-specific catalog logic into HiveExternalCatalog.
  -- Remove HiveSessionCatalog. All Hive-related stuff should go into 
HiveExternalCatalog. This would require moving caching either into 
HiveExternalCatalog, or just into SessionCatalog.
  -- Move the use of table properties to store data source options into 
HiveExternalCatalog.
  -- Potentially more.
- Remove the Hive-specific ScriptTransform implementation and make it more 
general so we can put it in sql/core.
- Implement HiveTableScan (and write path) as a data source, so we don't need a 
special planner rule for HiveTableScan.
- Remove HiveSharedState and HiveSessionState.

One thing that is still unclear to me is how to handle Hive UDF support. We 
might still need a special planner rule there.


  was:
Hive support is important to Spark SQL, as many Spark users use it to read from 
Hive. The current architecture is very difficult to maintain, and this ticket 
tracks progress towards getting us to a sane state.

A number of things we want to accomplish are:

- Move the Hive-specific catalog logic into HiveExternalCatalog.
  - Remove HiveSessionCatalog. All Hive-related stuff should go into 
HiveExternalCatalog. This would require moving caching either into 
HiveExternalCatalog, or just into SessionCatalog.
  - Move the use of table properties to store data source options into 
HiveExternalCatalog.
  - Potentially more.
- Remove the Hive-specific ScriptTransform implementation and make it more 
general so we can put it in sql/core.
- Implement HiveTableScan (and write path) as a data source, so we don't need a 
special planner rule for HiveTableScan.
- Remove HiveSharedState and HiveSessionState.

One thing that is still unclear to me is how to handle Hive UDF support. We 
might still need a special planner rule there.



> Refactor and improve Hive support
> -
>
> Key: SPARK-15691
> URL: https://issues.apache.org/jira/browse/SPARK-15691
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Reporter: Reynold Xin
>
> Hive support is important to Spark SQL, as many Spark users use it to read 
> from Hive. The current architecture is very difficult to maintain, and this 
> ticket tracks progress towards getting us to a sane state.
> A number of things we want to accomplish are:
> - Move the Hive-specific catalog logic into HiveExternalCatalog.
>   -- Remove HiveSessionCatalog. All Hive-related stuff should go into 
> HiveExternalCatalog. This would require moving caching either into 
> HiveExternalCatalog, or just into SessionCatalog.
>   -- Move the use of table properties to store data source options into 
> HiveExternalCatalog.
>   -- Potentially more.
> - Remove the Hive-specific ScriptTransform implementation and make it more 
> general so we can put it in sql/core.
> - Implement HiveTableScan (and write path) as a data source, so we don't need 
> a special planner rule for HiveTableScan.
> - Remove HiveSharedState and HiveSessionState.
> One thing that is still unclear to me is how to handle Hive UDF support. 
> We might still need a special planner rule there.






[jira] [Updated] (SPARK-15691) Refactor and improve Hive support

2016-06-17 Thread Yin Huai (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yin Huai updated SPARK-15691:
-
Description: 
Hive support is important to Spark SQL, as many Spark users use it to read from 
Hive. The current architecture is very difficult to maintain, and this ticket 
tracks progress towards getting us to a sane state.

A number of things we want to accomplish are:

- Move the Hive-specific catalog logic into HiveExternalCatalog.
  -- Remove HiveSessionCatalog. All Hive-related stuff should go into 
HiveExternalCatalog. This would require moving caching either into 
HiveExternalCatalog, or just into SessionCatalog.
  -- Move the use of table properties to store data source options into 
HiveExternalCatalog (so a CatalogTable returned by HiveExternalCatalog does 
not need to distinguish between tables stored in Hive formats and data source 
tables).
  -- Potentially more.
- Remove the Hive-specific ScriptTransform implementation and make it more 
general so we can put it in sql/core.
- Implement HiveTableScan (and write path) as a data source, so we don't need a 
special planner rule for HiveTableScan.
- Remove HiveSharedState and HiveSessionState.

One thing that is still unclear to me is how to handle Hive UDF support. We 
might still need a special planner rule there.


  was:
Hive support is important to Spark SQL, as many Spark users use it to read from 
Hive. The current architecture is very difficult to maintain, and this ticket 
tracks progress towards getting us to a sane state.

A number of things we want to accomplish are:

- Move the Hive-specific catalog logic into HiveExternalCatalog.
  -- Remove HiveSessionCatalog. All Hive-related stuff should go into 
HiveExternalCatalog. This would require moving caching either into 
HiveExternalCatalog, or just into SessionCatalog.
  -- Move the use of table properties to store data source options into 
HiveExternalCatalog.
  -- Potentially more.
- Remove the Hive-specific ScriptTransform implementation and make it more 
general so we can put it in sql/core.
- Implement HiveTableScan (and write path) as a data source, so we don't need a 
special planner rule for HiveTableScan.
- Remove HiveSharedState and HiveSessionState.

One thing that is still unclear to me is how to handle Hive UDF support. We 
might still need a special planner rule there.



> Refactor and improve Hive support
> -
>
> Key: SPARK-15691
> URL: https://issues.apache.org/jira/browse/SPARK-15691
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Reporter: Reynold Xin
>
> Hive support is important to Spark SQL, as many Spark users use it to read 
> from Hive. The current architecture is very difficult to maintain, and this 
> ticket tracks progress towards getting us to a sane state.
> A number of things we want to accomplish are:
> - Move the Hive-specific catalog logic into HiveExternalCatalog.
>   -- Remove HiveSessionCatalog. All Hive-related stuff should go into 
> HiveExternalCatalog. This would require moving caching either into 
> HiveExternalCatalog, or just into SessionCatalog.
>   -- Move the use of table properties to store data source options into 
> HiveExternalCatalog (so a CatalogTable returned by HiveExternalCatalog does 
> not need to distinguish between tables stored in Hive formats and data 
> source tables).
>   -- Potentially more.
> - Remove the Hive-specific ScriptTransform implementation and make it more 
> general so we can put it in sql/core.
> - Implement HiveTableScan (and write path) as a data source, so we don't need 
> a special planner rule for HiveTableScan; a data source sketch follows this 
> description.
> - Remove HiveSharedState and HiveSessionState.
> One thing that is still unclear to me is how to handle Hive UDF support. 
> We might still need a special planner rule there.
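To illustrate the "HiveTableScan as a data source" item, here is a minimal sketch against the public Data Source (V1) API, with stubbed contents: a RelationProvider plus a BaseRelation with TableScan is the kind of hook that would let the planner treat a Hive table like any other relation, with no HiveTableScan-specific rule. The relation below does not actually talk to a metastore; the table name and rows are placeholders.

{code:scala}
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Entry point the data source machinery would discover; a real version would
// resolve the table through the metastore instead of building a stub relation.
class DefaultSource extends RelationProvider {
  override def createRelation(
      sqlContext: SQLContext,
      parameters: Map[String, String]): BaseRelation =
    new SketchHiveRelation(parameters.getOrElse("table", "unknown"))(sqlContext)
}

// Stand-in relation: schema and rows are hard-coded placeholders, but the
// shape (schema + buildScan) is what lets generic planning rules apply.
class SketchHiveRelation(table: String)(@transient val sqlContext: SQLContext)
  extends BaseRelation with TableScan {

  override def schema: StructType =
    StructType(Seq(StructField("value", StringType)))

  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.parallelize(Seq(Row(s"placeholder row from $table")))
}
{code}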






[jira] [Updated] (SPARK-15691) Refactor and improve Hive support

2018-01-08 Thread Sameer Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sameer Agarwal updated SPARK-15691:
---
Target Version/s: 2.4.0  (was: 2.3.0)

> Refactor and improve Hive support
> -
>
> Key: SPARK-15691
> URL: https://issues.apache.org/jira/browse/SPARK-15691
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Reporter: Reynold Xin
>
> Hive support is important to Spark SQL, as many Spark users use it to read 
> from Hive. The current architecture is very difficult to maintain, and this 
> ticket tracks progress towards getting us to a sane state.
> A number of things we want to accomplish are:
> - Move the Hive-specific catalog logic into HiveExternalCatalog.
>   -- Remove HiveSessionCatalog. All Hive-related stuff should go into 
> HiveExternalCatalog. This would require moving caching either into 
> HiveExternalCatalog, or just into SessionCatalog.
>   -- Move the use of table properties to store data source options into 
> HiveExternalCatalog (so a CatalogTable returned by HiveExternalCatalog does 
> not need to distinguish between tables stored in Hive formats and data 
> source tables).
>   -- Potentially more.
> - Remove the Hive-specific ScriptTransform implementation and make it more 
> general so we can put it in sql/core.
> - Implement HiveTableScan (and write path) as a data source, so we don't need 
> a special planner rule for HiveTableScan.
> - Remove HiveSharedState and HiveSessionState.
> One thing that is still unclear to me is how to handle Hive UDF support. 
> We might still need a special planner rule there.






[jira] [Updated] (SPARK-15691) Refactor and improve Hive support

2018-09-11 Thread Wenchen Fan (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenchen Fan updated SPARK-15691:

Target Version/s: 3.0.0  (was: 2.4.0)

> Refactor and improve Hive support
> -
>
> Key: SPARK-15691
> URL: https://issues.apache.org/jira/browse/SPARK-15691
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Reporter: Reynold Xin
>Priority: Major
>
> Hive support is important to Spark SQL, as many Spark users use it to read 
> from Hive. The current architecture is very difficult to maintain, and this 
> ticket tracks progress towards getting us to a sane state.
> A number of things we want to accomplish are:
> - Move the Hive-specific catalog logic into HiveExternalCatalog.
>   -- Remove HiveSessionCatalog. All Hive-related stuff should go into 
> HiveExternalCatalog. This would require moving caching either into 
> HiveExternalCatalog, or just into SessionCatalog.
>   -- Move the use of table properties to store data source options into 
> HiveExternalCatalog (so a CatalogTable returned by HiveExternalCatalog does 
> not need to distinguish between tables stored in Hive formats and data 
> source tables).
>   -- Potentially more.
> - Remove the Hive-specific ScriptTransform implementation and make it more 
> general so we can put it in sql/core; a TRANSFORM usage sketch follows this 
> description.
> - Implement HiveTableScan (and write path) as a data source, so we don't need 
> a special planner rule for HiveTableScan.
> - Remove HiveSharedState and HiveSessionState.
> One thing that is still unclear to me is how to handle Hive UDF support. 
> We might still need a special planner rule there.
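For context on the ScriptTransform item, the sketch below shows the Hive-style TRANSFORM syntax whose implementation the ticket proposes to generalize out of the Hive module. The table name src is hypothetical, and running this currently assumes a Hive-enabled session.

{code:scala}
import org.apache.spark.sql.SparkSession

object ScriptTransformSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("script-transform-sketch")
      .enableHiveSupport()   // script transformation is currently tied to the Hive module
      .getOrCreate()

    // Pipe (key, value) rows through an external script (here simply 'cat')
    // and read the script's output back as (key, value) columns.
    spark.sql(
      """SELECT TRANSFORM (key, value)
        |USING 'cat'
        |AS (key, value)
        |FROM src
        |""".stripMargin).show()

    spark.stop()
  }
}
{code}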


