[jira] [Created] (CARBONDATA-1198) Change Unsafe configuration to dynamic

2017-06-20 Thread Jacky Li (JIRA)
Jacky Li created CARBONDATA-1198: Summary: Change Unsafe configuration to dynamic Key: CARBONDATA-1198 URL: https://issues.apache.org/jira/browse/CARBONDATA-1198 Project: CarbonData Issue Type: …

[jira] [Created] (CARBONDATA-1199) Change Unsafe configuration to dynamic

2017-06-20 Thread Jacky Li (JIRA)
Jacky Li created CARBONDATA-1199: Summary: Change Unsafe configuration to dynamic Key: CARBONDATA-1199 URL: https://issues.apache.org/jira/browse/CARBONDATA-1199 Project: CarbonData Issue Type: …

[jira] [Created] (CARBONDATA-1200) update data failed on spark 1.6.2

2017-06-20 Thread Jarck (JIRA)
Jarck created CARBONDATA-1200: Summary: update data failed on spark 1.6.2 Key: CARBONDATA-1200 URL: https://issues.apache.org/jira/browse/CARBONDATA-1200 Project: CarbonData Issue Type: Bug

[jira] [Created] (CARBONDATA-1201) don't support insert syntax "insert into table select constants" on spark 1.6.2

2017-06-20 Thread Jarck (JIRA)
Jarck created CARBONDATA-1201: Summary: don't support insert syntax "insert into table select constants" on spark 1.6.2 Key: CARBONDATA-1201 URL: https://issues.apache.org/jira/browse/CARBONDATA-1201 …

Question

2017-06-20 Thread Lu Cao
Hi dev, Does anyone know why the decimal type in the compaction flow is processed as below in CarbonFactDataHandlerColumnar? I can't understand it from the comments.

  // convert measure columns
  for (int i = 0; i < type.length; i++) {
    Object value = rows[i];
    // in compaction flow the measure wi…
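For readers without the source at hand, a minimal sketch of the passage being asked about, reconstructed from the truncated quote; everything beyond the quoted lines (the isCompactionFlow flag, the decimal check, the unwrap) is an assumption for illustration, not the actual CarbonData code:

  import java.math.BigDecimal;
  import org.apache.spark.sql.types.Decimal;

  // Sketch only: reconstructed from the truncated quote above.
  static void convertMeasureColumns(Object[] rows, char[] type, boolean isCompactionFlow) {
    // convert measure columns
    for (int i = 0; i < type.length; i++) {
      Object value = rows[i];
      // in the compaction flow a decimal measure comes in as a Spark
      // Decimal (query output), so unwrap it to java.math.BigDecimal
      if (isCompactionFlow && value instanceof Decimal) {
        rows[i] = ((Decimal) value).toJavaBigDecimal();
      }
    }
  }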

Re: Question

2017-06-20 Thread Ravindra Pesala
Hi, it is because the compaction flow uses the query flow: it queries the data from the segments that need to be compacted and sends it for merge sort. So the writer step gets the Spark row data; that's why it sees a Spark decimal during compaction. Regards, Ravindra.
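As a standalone illustration of this answer: because compaction reads rows back through the query path, a decimal measure arrives wrapped in Spark's Decimal type instead of a plain java.math.BigDecimal, and the writer has to unwrap it. A runnable sketch of that round trip (the scenario is illustrative, not CarbonData code):

  import java.math.BigDecimal;
  import org.apache.spark.sql.types.Decimal;

  public class CompactionDecimalDemo {
    public static void main(String[] args) {
      // In the load flow the parsed input value is already a java.math.BigDecimal.
      BigDecimal fromLoad = new BigDecimal("123.45");

      // In the compaction flow the same value comes back from the query engine
      // wrapped in Spark's Decimal type.
      Decimal fromQuery = Decimal.apply(fromLoad);

      // The writer step therefore unwraps it before writing.
      BigDecimal unwrapped = fromQuery.toJavaBigDecimal();
      System.out.println(fromLoad.equals(unwrapped)); // prints: true
    }
  }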

Re: Question

2017-06-20 Thread Cao Lu 曹鲁
Got it, thank you Ravi!

[jira] [Created] (CARBONDATA-1202) delete data failed on spark 1.6.2

2017-06-20 Thread Jarck (JIRA)
Jarck created CARBONDATA-1202: Summary: delete data failed on spark 1.6.2 Key: CARBONDATA-1202 URL: https://issues.apache.org/jira/browse/CARBONDATA-1202 Project: CarbonData Issue Type: Bug

[jira] [Created] (CARBONDATA-1203) insert data caused many duplicated data on spark 1.6.2

2017-06-20 Thread Jarck (JIRA)
Jarck created CARBONDATA-1203: Summary: insert data caused many duplicated data on spark 1.6.2 Key: CARBONDATA-1203 URL: https://issues.apache.org/jira/browse/CARBONDATA-1203 Project: CarbonData …

[jira] [Created] (CARBONDATA-1204) Update operation fail and generate extra records when test with big data

2017-06-20 Thread chenerlu (JIRA)
chenerlu created CARBONDATA-1204: Summary: Update operation fail and generate extra records when test with big data Key: CARBONDATA-1204 URL: https://issues.apache.org/jira/browse/CARBONDATA-1204 Project: CarbonData …

[GitHub] carbondata-site issue #44: Enhance And Fixed UI Bugs

2017-06-20 Thread sgururajshetty
Github user sgururajshetty commented on the issue: https://github.com/apache/carbondata-site/pull/44 LGTM @chenliang613 please review