[jira] [Commented] (CARBONDATA-521) Depends on more stable class of spark in spark2

2016-12-10 Thread Fei Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15739172#comment-15739172
 ] 

Fei Wang commented on CARBONDATA-521:
-

Please do not close this until we refactor all the dependencies on spark.

> Depends on more stable class of spark in spark2
> ---
>
> Key: CARBONDATA-521
> URL: https://issues.apache.org/jira/browse/CARBONDATA-521
> Project: CarbonData
>  Issue Type: Sub-task
>  Components: spark-integration
>Reporter: Fei Wang
> Fix For: 1.0.0-incubating
>
>
> Avoid using unstable classes in spark2; otherwise it leads to compatibility 
> issues with spark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-521) Depends on more stable class of spark in spark2

2016-12-10 Thread Fei Wang (JIRA)
Fei Wang created CARBONDATA-521:
---

 Summary: Depends on more stable class of spark in spark2
 Key: CARBONDATA-521
 URL: https://issues.apache.org/jira/browse/CARBONDATA-521
 Project: CarbonData
  Issue Type: Sub-task
Reporter: Fei Wang


Avoid using unstable classes in spark2; otherwise it leads to compatibility 
issues with spark.





[jira] [Created] (CARBONDATA-520) Executor can not get the read support class

2016-12-10 Thread Fei Wang (JIRA)
Fei Wang created CARBONDATA-520:
---

 Summary: Executor can not get the read support class 
 Key: CARBONDATA-520
 URL: https://issues.apache.org/jira/browse/CARBONDATA-520
 Project: CarbonData
  Issue Type: Sub-task
  Components: spark-integration
Reporter: Fei Wang
Assignee: Fei Wang


The executor cannot get the read support class; this leads to a cast exception 
when running carbon on spark2.
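The failure mode can be sketched outside Spark: the executor resolves the read support implementation by class name, and when that lookup fails it falls back to a default that produces objects of an unexpected type, which only surfaces later as a cast exception. A minimal Python sketch of that lookup pattern (all names here are hypothetical illustrations, not the actual CarbonData API):

```python
import importlib

def load_read_support(class_name, default):
    """Return the class named by its fully qualified name, or the default
    when it cannot be loaded -- the way an executor might resolve a read
    support class from a serialized configuration value."""
    try:
        module_name, _, cls_name = class_name.rpartition(".")
        module = importlib.import_module(module_name)
        return getattr(module, cls_name)
    except (ImportError, AttributeError, ValueError):
        # An executor that silently falls back here hands back rows of a
        # different type than the driver expects -- hence the cast exception.
        return default

class DictReadSupport:
    """Hypothetical default read support producing plain dict rows."""
    def read(self, row):
        return dict(row)

# An unknown class name falls back to the default implementation.
support_cls = load_read_support("no.such.module.ReadSupport", DictReadSupport)
```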





[jira] [Created] (CARBONDATA-517) Use carbon property to get the store path/kettle home

2016-12-09 Thread Fei Wang (JIRA)
Fei Wang created CARBONDATA-517:
---

 Summary: Use carbon property to get the store path/kettle home
 Key: CARBONDATA-517
 URL: https://issues.apache.org/jira/browse/CARBONDATA-517
 Project: CarbonData
  Issue Type: Sub-task
  Components: spark-integration
Affects Versions: 0.2.0-incubating
Reporter: Fei Wang
Assignee: Fei Wang


To distinguish the carbon config from the spark config: for carbon config, we 
use carbon properties to get the values.
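The idea can be sketched as a carbon-side property store that is looked up separately from the spark configuration, so settings like the store path or kettle home never come from SparkConf. A toy Python sketch under hypothetical property names (not CarbonData's actual API):

```python
class CarbonProperties:
    """Toy stand-in for a carbon property store, kept separate from the
    spark configuration so carbon settings resolve in one place."""

    def __init__(self):
        self._props = {}

    def add_property(self, key, value):
        self._props[key] = value

    def get_property(self, key, default=None):
        return self._props.get(key, default)

carbon = CarbonProperties()
carbon.add_property("carbon.storelocation", "/tmp/carbon/store")

# Both lookups go through carbon properties, not through SparkConf.
store_path = carbon.get_property("carbon.storelocation")
kettle_home = carbon.get_property("carbon.kettle.home", "<unset>")
```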





[jira] [Created] (CARBONDATA-491) do not use runnablecommand in spark2

2016-12-03 Thread Fei Wang (JIRA)
Fei Wang created CARBONDATA-491:
---

 Summary: do not use runnablecommand in spark2
 Key: CARBONDATA-491
 URL: https://issues.apache.org/jira/browse/CARBONDATA-491
 Project: CarbonData
  Issue Type: Sub-task
  Components: spark-integration
Affects Versions: 0.3.0-incubating
Reporter: Fei Wang


We should not use RunnableCommand in spark; that may lead to compatibility 
issues.





[jira] [Assigned] (CARBONDATA-489) spark2 decimal issue

2016-12-02 Thread Fei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Wang reassigned CARBONDATA-489:
---

Assignee: Fei Wang

> spark2 decimal issue
> 
>
> Key: CARBONDATA-489
> URL: https://issues.apache.org/jira/browse/CARBONDATA-489
> Project: CarbonData
>  Issue Type: Sub-task
>  Components: spark-integration
>Reporter: Fei Wang
>Assignee: Fei Wang
> Fix For: 0.3.0-incubating
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Creating a table with a decimal field and then querying it will throw an 
> error; decimal(0, 0) is not supported.





[jira] [Created] (CARBONDATA-489) spark2 decimal issue

2016-12-02 Thread Fei Wang (JIRA)
Fei Wang created CARBONDATA-489:
---

 Summary: spark2 decimal issue
 Key: CARBONDATA-489
 URL: https://issues.apache.org/jira/browse/CARBONDATA-489
 Project: CarbonData
  Issue Type: Sub-task
  Components: spark-integration
Reporter: Fei Wang


Creating a table with a decimal field and then querying it will throw an error; 
decimal(0, 0) is not supported.
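For context, SQL decimal types require a precision of at least 1 and a scale between 0 and the precision, so decimal(0, 0) is not a valid type. A small Python check of that constraint (a sketch of the rule, not CarbonData's actual validation code; the default cap of 38 matches Spark SQL's maximum decimal precision):

```python
def is_valid_decimal(precision, scale, max_precision=38):
    """SQL decimal(precision, scale): precision must be >= 1 (up to the
    engine's maximum) and the scale must satisfy 0 <= scale <= precision."""
    return 1 <= precision <= max_precision and 0 <= scale <= precision

assert not is_valid_decimal(0, 0)   # the failing case from this issue
assert is_valid_decimal(10, 2)      # e.g. decimal(10, 2) for currency
```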





[jira] [Created] (CARBONDATA-323) Fix the load data local syntax

2016-10-18 Thread Fei Wang (JIRA)
Fei Wang created CARBONDATA-323:
---

 Summary: Fix the load data local syntax
 Key: CARBONDATA-323
 URL: https://issues.apache.org/jira/browse/CARBONDATA-323
 Project: CarbonData
  Issue Type: Bug
  Components: spark-integration
Affects Versions: 0.1.0-incubating
Reporter: Fei Wang
Assignee: Fei Wang
 Fix For: 0.2.0-incubating


Carbon should not support the LOAD DATA LOCAL syntax, so fix it.





[jira] [Updated] (CARBONDATA-322) integrate spark 2.x

2016-10-18 Thread Fei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Wang updated CARBONDATA-322:

Description: 
As spark 2.0 is released, there are many nice features such as a more efficient 
parser, vectorized execution, and adaptive execution. 
It is good to integrate with spark 2.x.

On the other side, the spark integration is now heavily coupled with spark; we 
should redesign the spark integration. It should satisfy the following 
requirements:

1. Decoupled from spark, integrating through the spark datasource API (V2)
2. The integration should support a vectorized carbon reader
3. Support writing to carbondata from a dataframe
...


  was:
As spark 2.0 is released, there are many nice features such as a more efficient 
parser, vectorized execution, and adaptive execution. It is good to integrate 
with spark 2.x.

On the other side, the spark integration is now heavily coupled with spark; we 
should redesign the spark integration. It should satisfy the following 
requirements:

1. Decoupled from spark, integrating through the spark datasource API (V2)
2. The integration should support a vectorized carbon reader
3. Support writing to carbondata from a dataframe
...



> integrate spark 2.x 
> 
>
> Key: CARBONDATA-322
> URL: https://issues.apache.org/jira/browse/CARBONDATA-322
> Project: CarbonData
>  Issue Type: Bug
>  Components: spark-integration
>Affects Versions: 0.2.0-incubating
>Reporter: Fei Wang
> Fix For: 0.3.0-incubating
>
>
> As spark 2.0 is released, there are many nice features such as a more 
> efficient parser, vectorized execution, and adaptive execution. 
> It is good to integrate with spark 2.x.
> On the other side, the spark integration is now heavily coupled with spark; 
> we should redesign the spark integration. It should satisfy the following 
> requirements:
> 1. Decoupled from spark, integrating through the spark datasource API (V2)
> 2. The integration should support a vectorized carbon reader
> 3. Support writing to carbondata from a dataframe
> ...





[jira] [Updated] (CARBONDATA-322) integrate spark 2.x

2016-10-18 Thread Fei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Wang updated CARBONDATA-322:

Description: 
As spark 2.0 is released, there are many nice features such as a more efficient 
parser, vectorized execution, and adaptive execution. 
It is good to integrate with spark 2.x.

On the other side, in carbondata the spark integration is now heavily coupled 
with spark code and the code needs cleaning; we should redesign the spark 
integration. It should satisfy the following requirements:

1. Decoupled from spark, integrating through the spark datasource API (V2)
2. The integration should support a vectorized carbon reader
3. Support writing to carbondata from a dataframe
...


  was:
As spark 2.0 is released, there are many nice features such as a more efficient 
parser, vectorized execution, and adaptive execution. 
It is good to integrate with spark 2.x.

On the other side, the spark integration is now heavily coupled with spark; we 
should redesign the spark integration. It should satisfy the following 
requirements:

1. Decoupled from spark, integrating through the spark datasource API (V2)
2. The integration should support a vectorized carbon reader
3. Support writing to carbondata from a dataframe
...



> integrate spark 2.x 
> 
>
> Key: CARBONDATA-322
> URL: https://issues.apache.org/jira/browse/CARBONDATA-322
> Project: CarbonData
>  Issue Type: Bug
>  Components: spark-integration
>Affects Versions: 0.2.0-incubating
>Reporter: Fei Wang
> Fix For: 0.3.0-incubating
>
>
> As spark 2.0 is released, there are many nice features such as a more 
> efficient parser, vectorized execution, and adaptive execution. 
> It is good to integrate with spark 2.x.
> On the other side, in carbondata the spark integration is now heavily coupled 
> with spark code and the code needs cleaning; we should redesign the spark 
> integration. It should satisfy the following requirements:
> 1. Decoupled from spark, integrating through the spark datasource API (V2)
> 2. The integration should support a vectorized carbon reader
> 3. Support writing to carbondata from a dataframe
> ...





[jira] [Created] (CARBONDATA-322) integrate spark 2.x

2016-10-18 Thread Fei Wang (JIRA)
Fei Wang created CARBONDATA-322:
---

 Summary: integrate spark 2.x 
 Key: CARBONDATA-322
 URL: https://issues.apache.org/jira/browse/CARBONDATA-322
 Project: CarbonData
  Issue Type: Bug
  Components: spark-integration
Affects Versions: 0.2.0-incubating
Reporter: Fei Wang
 Fix For: 0.3.0-incubating


As spark 2.0 is released, there are many nice features such as a more efficient 
parser, vectorized execution, and adaptive execution. It is good to integrate 
with spark 2.x.

On the other side, the spark integration is now heavily coupled with spark; we 
should redesign the spark integration. It should satisfy the following 
requirements:

1. Decoupled from spark, integrating through the spark datasource API (V2)
2. The integration should support a vectorized carbon reader
3. Support writing to carbondata from a dataframe
...
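The decoupling goal in the first requirement can be sketched abstractly: the engine consumes the format only through a narrow reader interface, so the format implementation never reaches into engine internals. A toy Python illustration of that datasource-style boundary with a vectorized (columnar-batch) reader (hypothetical names, not the Spark DataSource V2 API itself):

```python
from abc import ABC, abstractmethod

class BatchReader(ABC):
    """The engine side depends only on this narrow interface, never on the
    concrete format's internals -- the essence of a datasource boundary."""

    @abstractmethod
    def read_batches(self):
        """Yield columnar batches (here, dicts of column -> list of values)."""

class ToyCarbonReader(BatchReader):
    """A stand-in 'format' that produces vectorized (columnar) batches."""

    def __init__(self, rows, batch_size=2):
        self.rows = rows
        self.batch_size = batch_size

    def read_batches(self):
        for i in range(0, len(self.rows), self.batch_size):
            chunk = self.rows[i:i + self.batch_size]
            yield {"id": [r[0] for r in chunk], "name": [r[1] for r in chunk]}

def scan(reader):
    """Engine-side scan: works with any BatchReader, knowing nothing else."""
    return [batch for batch in reader.read_batches()]

batches = scan(ToyCarbonReader([(1, "a"), (2, "b"), (3, "c")]))
```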



