[jira] [Created] (CARBONDATA-1205) Use Spark 2.1 and Hadoop 2.7.2 as default from 1.2.0 onwards

2017-06-21 Thread Liang Chen (JIRA)
Liang Chen created CARBONDATA-1205:
--

 Summary: Use Spark 2.1 and Hadoop 2.7.2 as default from 1.2.0 
onwards
 Key: CARBONDATA-1205
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1205
 Project: CarbonData
  Issue Type: Task
  Components: build
Reporter: Liang Chen


From 1.2.0 onwards, many features are being developed based on Spark 2.1 and Hadoop
2.7.2. This task is to make Spark 2.1 and Hadoop 2.7.2 the default compilation versions
in the parent pom.

Discussion thread:
https://lists.apache.org/thread.html/5b186b5868de16280ced1b623fae8b5c54933def44398f6a4310ffb3@%3Cdev.carbondata.apache.org%3E





[jira] [Created] (CARBONDATA-1206) 10. implement range_interval partition

2017-06-21 Thread QiangCai (JIRA)
QiangCai created CARBONDATA-1206:


 Summary: 10. implement range_interval partition
 Key: CARBONDATA-1206
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1206
 Project: CarbonData
  Issue Type: Sub-task
Reporter: QiangCai








[jira] [Created] (CARBONDATA-1207) Resource leak problem in CarbonDictionaryWriter

2017-06-21 Thread Mohammad Shahid Khan (JIRA)
Mohammad Shahid Khan created CARBONDATA-1207:


 Summary: Resource leak problem in CarbonDictionaryWriter
 Key: CARBONDATA-1207
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1207
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
Reporter: Mohammad Shahid Khan
Assignee: Mohammad Shahid Khan


If, during dictionary generation in a data load, an exception occurs while writing the
dictionary or the dictionary metadata, the stream is never closed.
This can lead to a denial of service (DoS) on subsequent incremental loads.
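
A minimal sketch of the kind of fix this implies, with hypothetical stream names (the
actual CarbonDictionaryWriter internals may differ): close the dictionary and metadata
streams via try-with-resources, so a failure while writing cannot leak the handles.

import java.io.Closeable;
import java.io.IOException;

// Hypothetical stand-ins for the dictionary data and metadata streams;
// the real CarbonDictionaryWriter internals are not shown in this issue.
class DictionaryChunkStream implements Closeable {
  void write(byte[] chunk) throws IOException { /* write dictionary values */ }
  @Override public void close() throws IOException { /* release the file handle */ }
}

class DictionaryMetaStream implements Closeable {
  void writeMeta(long offset) throws IOException { /* write chunk metadata */ }
  @Override public void close() throws IOException { /* release the file handle */ }
}

public class DictionaryWriteSketch {
  // try-with-resources guarantees both streams are closed even if
  // write() or writeMeta() throws, which avoids the leak described above.
  static void writeDictionaryChunk(byte[] chunk, long offset) throws IOException {
    try (DictionaryChunkStream data = new DictionaryChunkStream();
         DictionaryMetaStream meta = new DictionaryMetaStream()) {
      data.write(chunk);
      meta.writeMeta(offset);
    }
  }

  public static void main(String[] args) throws IOException {
    writeDictionaryChunk(new byte[] {1, 2, 3}, 0L);
  }
}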





[jira] [Created] (CARBONDATA-1208) 11. specify partition class to implement custom partition

2017-06-21 Thread QiangCai (JIRA)
QiangCai created CARBONDATA-1208:


 Summary: 11. specify partition class to implement custom partition
 Key: CARBONDATA-1208
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1208
 Project: CarbonData
  Issue Type: Sub-task
Reporter: QiangCai








[jira] [Created] (CARBONDATA-1209) 11. specify partition class to implement custom partition

2017-06-21 Thread QiangCai (JIRA)
QiangCai created CARBONDATA-1209:


 Summary: 11. specify partition class to implement custom partition
 Key: CARBONDATA-1209
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1209
 Project: CarbonData
  Issue Type: Sub-task
Reporter: QiangCai








[jira] [Created] (CARBONDATA-1210) Exception should be thrown on bad record write failure to log file or csv file.

2017-06-21 Thread Mohammad Shahid Khan (JIRA)
Mohammad Shahid Khan created CARBONDATA-1210:


 Summary: Exception should be thrown on bad record write failure to 
log file or csv file.
 Key: CARBONDATA-1210
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1210
 Project: CarbonData
  Issue Type: Bug
Reporter: Mohammad Shahid Khan
Priority: Minor








[jira] [Created] (CARBONDATA-1211) Implicit Column Projection

2017-06-21 Thread sounak chakraborty (JIRA)
sounak chakraborty created CARBONDATA-1211:
--

 Summary: Implicit Column Projection
 Key: CARBONDATA-1211
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1211
 Project: CarbonData
  Issue Type: Bug
  Components: core
Affects Versions: 1.1.0
Reporter: sounak chakraborty


Garbage values are returned when a projection is done on the implicit column, i.e.
tupleId. This occurs only when the vector reader is enabled.





[jira] [Created] (CARBONDATA-1212) Memory leak in case of compaction when unsafe is true

2017-06-21 Thread Manish Gupta (JIRA)
Manish Gupta created CARBONDATA-1212:


 Summary: Memory leak in case of compaction when unsafe is true
 Key: CARBONDATA-1212
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1212
 Project: CarbonData
  Issue Type: Bug
Reporter: Manish Gupta
Assignee: Manish Gupta
 Fix For: 1.2.0
 Attachments: data.csv

During compaction, a queryExecutor object is created for each of multiple blocks, but 
the objects are not retained and the finish method is called only on the last query 
executor instance created. Because of this, the memory allocated by the previous 
objects is not released, which can lead to an out-of-memory issue.
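
A minimal sketch of the kind of change this suggests, using hypothetical class names
(the real compaction and query executor classes are not shown here): retain every
executor that was created and call finish on each of them once the merge is done,
instead of only on the last instance.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the per-block query executor used during compaction;
// in the real code finish() is what releases the memory held for the block.
class BlockQueryExecutor {
  private final String blockId;
  BlockQueryExecutor(String blockId) { this.blockId = blockId; }
  void execute() { /* read and merge the rows of this block */ }
  void finish() { /* free the memory allocated for this block */ }
}

public class CompactionMemorySketch {
  // Retain every executor that was created instead of overwriting a single
  // reference, so finish() runs for all of them, not only the last one.
  static void compact(List<String> blockIds) {
    List<BlockQueryExecutor> executors = new ArrayList<>();
    try {
      for (String blockId : blockIds) {
        BlockQueryExecutor executor = new BlockQueryExecutor(blockId);
        executors.add(executor);
        executor.execute();
      }
    } finally {
      for (BlockQueryExecutor executor : executors) {
        executor.finish();  // release memory for every block that was opened
      }
    }
  }

  public static void main(String[] args) {
    compact(Arrays.asList("block-0", "block-1", "block-2"));
  }
}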

Steps to reproduce:
--
CREATE TABLE IF NOT EXISTS t3 (ID Int, date Date, country String, name String, 
phonetype String, serialname char(10), salary Int) STORED BY 'carbondata' 
TBLPROPERTIES('DICTIONARY_EXCLUDE'='name')
LOAD DATA LOCAL INPATH 'data.csv' into table t3
LOAD DATA LOCAL INPATH 'data.csv' into table t3
alter table t3 compact 'major'



