[jira] [Updated] (CARBONDATA-783) Loading data with Single Pass 'true' option is throwing an exception

2017-03-20 Thread Geetika Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geetika Gupta updated CARBONDATA-783:
-
Summary: Loading data with Single Pass 'true' option is throwing an 
exception  (was: Single Pass Loading is not working properly )

> Loading data with Single Pass 'true' option is throwing an exception
> ---
>
> Key: CARBONDATA-783
> URL: https://issues.apache.org/jira/browse/CARBONDATA-783
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.1.0-incubating
> Environment: spark 2.1
>Reporter: Geetika Gupta
>Priority: Trivial
> Attachments: 7000_UniqData.csv
>
>
> I tried to create a table using the following query:
> CREATE TABLE uniq_include_dictionary (CUST_ID int,CUST_NAME 
> String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_INCLUDE'='CUST_ID,Double_COLUMN2,DECIMAL_COLUMN2');
> Table creation was successful, but when I tried to load data into the table 
> it showed the following error:
> ERROR 16-03 13:41:32,354 - nioEventLoopGroup-8-2 
> java.lang.IndexOutOfBoundsException: readerIndex(64) + length(25) exceeds 
> writerIndex(80): UnpooledUnsafeDirectByteBuf(ridx: 64, widx: 80, cap: 80)
>   at 
> io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1161)
>   at 
> io.netty.buffer.AbstractByteBuf.checkReadableBytes(AbstractByteBuf.java:1155)
>   at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:694)
>   at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:702)
>   at 
> org.apache.carbondata.core.dictionary.generator.key.DictionaryMessage.readData(DictionaryMessage.java:70)
>   at 
> org.apache.carbondata.core.dictionary.server.DictionaryServerHandler.channelRead(DictionaryServerHandler.java:59)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
>   at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:652)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:575)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:489)
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
>   at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>   at java.lang.Thread.run(Thread.java:745)
> ERROR 16-03 13:41:32,355 - nioEventLoopGroup-8-2 exceptionCaught
> java.lang.IndexOutOfBoundsException: readerIndex(64) + length(25) exceeds 
> writerIndex(80): UnpooledUnsafeDirectByteBuf(ridx: 64, widx: 80, cap: 80)
>   at 
> io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1161)
>   at 
> io.netty.buffer.AbstractByteBuf.checkReadableBytes(AbstractByteBuf.java:1155)
>   at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:694)
>   at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:702)
>   at 
> org.apache.carbondata.core.dictionary.generator.key.DictionaryMessage.readData(DictionaryMessage.java:70)
>   at 
> org.apache.carbondata.core.dictionary.server.DictionaryServerHandler.channelRead(DictionaryServerHandler.java:59)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
>  

[jira] [Created] (CARBONDATA-788) Like operator is not working properly

2017-03-17 Thread Geetika Gupta (JIRA)
Geetika Gupta created CARBONDATA-788:


 Summary: Like operator is not working properly
 Key: CARBONDATA-788
 URL: https://issues.apache.org/jira/browse/CARBONDATA-788
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.1.0-incubating
 Environment: spark 2.1
Reporter: Geetika Gupta
Priority: Trivial
 Attachments: 2000_UniqData.csv

I tried to create a table using the following command:

CREATE TABLE uniqdata_INC(CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format';

Load command for the table :

LOAD DATA INPATH 
'hdfs://localhost:54311/BabuStore/DATA/uniqdata/2000_UniqData.csv' into table 
uniqdata_INC OPTIONS('DELIMITER'=',' , 
'QUOTECHAR'='"','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

When I performed the query below using the 'like' operator, it displayed no 
results.

select cust_id from uniqdata_INC where cust_id like 8999;

Result:
+--+--+
| cust_id  |
+--+--+
+--+--+
No rows selected (0.515 seconds)

PFA the CSV used for input data.
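
A hedged workaround sketch (untested; LIKE is defined on string operands, so an 
explicit cast and a string pattern avoid relying on an implicit 
numeric-to-pattern conversion):

select cust_id from uniqdata_INC where cast(cust_id as string) like '8999%';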



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CARBONDATA-783) Single Pass Loading is not working properly

2017-03-16 Thread Geetika Gupta (JIRA)
Geetika Gupta created CARBONDATA-783:


 Summary: Single Pass Loading is not working properly 
 Key: CARBONDATA-783
 URL: https://issues.apache.org/jira/browse/CARBONDATA-783
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.1.0-incubating
 Environment: spark 2.1
Reporter: Geetika Gupta
Priority: Trivial
 Attachments: 7000_UniqData.csv

I tried to create a table using the following query:

CREATE TABLE uniq_include_dictionary (CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' 
TBLPROPERTIES('DICTIONARY_INCLUDE'='CUST_ID,Double_COLUMN2,DECIMAL_COLUMN2');
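
(The load command itself is not quoted in this report. A single-pass load for 
this table would look roughly like the sketch below; the HDFS path is an 
assumption modelled on the other reports in this thread, and SINGLE_PASS is the 
documented load option that routes dictionary generation through the dictionary 
server.)

LOAD DATA INPATH 'hdfs://localhost:54311/BabuStore/DATA/uniqdata/7000_UniqData.csv' 
into table uniq_include_dictionary OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 
'SINGLE_PASS'='true', 
'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');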

Table creation was successful, but when I tried to load data into the table it 
showed the following error:
ERROR 16-03 13:41:32,354 - nioEventLoopGroup-8-2 
java.lang.IndexOutOfBoundsException: readerIndex(64) + length(25) exceeds 
writerIndex(80): UnpooledUnsafeDirectByteBuf(ridx: 64, widx: 80, cap: 80)
at 
io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1161)
at 
io.netty.buffer.AbstractByteBuf.checkReadableBytes(AbstractByteBuf.java:1155)
at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:694)
at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:702)
at 
org.apache.carbondata.core.dictionary.generator.key.DictionaryMessage.readData(DictionaryMessage.java:70)
at 
org.apache.carbondata.core.dictionary.server.DictionaryServerHandler.channelRead(DictionaryServerHandler.java:59)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:652)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:575)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:489)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
ERROR 16-03 13:41:32,355 - nioEventLoopGroup-8-2 exceptionCaught
java.lang.IndexOutOfBoundsException: readerIndex(64) + length(25) exceeds 
writerIndex(80): UnpooledUnsafeDirectByteBuf(ridx: 64, widx: 80, cap: 80)
at 
io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1161)
at 
io.netty.buffer.AbstractByteBuf.checkReadableBytes(AbstractByteBuf.java:1155)
at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:694)
at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:702)
at 
org.apache.carbondata.core.dictionary.generator.key.DictionaryMessage.readData(DictionaryMessage.java:70)
at 
org.apache.carbondata.core.dictionary.server.DictionaryServerHandler.channelRead(DictionaryServerHandler.java:59)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.jav

[jira] [Closed] (CARBONDATA-653) Select query displays wrong data for Decimal(38,38)

2017-02-15 Thread Geetika Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geetika Gupta closed CARBONDATA-653.

Resolution: Fixed

> Select query displays wrong data for Decimal(38,38)
> ---
>
> Key: CARBONDATA-653
> URL: https://issues.apache.org/jira/browse/CARBONDATA-653
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6
>Reporter: Geetika Gupta
>Assignee: Rahul Kumar
>Priority: Minor
> Attachments: Screenshot from 2017-01-17 18-48-43.png, 
> testMaxDigitsAfterDecimal.csv
>
>
> I tried to load data into a table having decimal(38,38) as a column. The 
> data load was successful, but when I displayed the data, the table contained 
> some wrong values.
> Below are the queries:
> create table testDecimal(a decimal(38,38), b String) stored by 'carbondata';
> LOAD DATA INPATH 
> 'hdfs://localhost:54311/testFiles/testMaxDigitsAfterDecimal.csv' into table 
> testDecimal OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='a,b');
> select * from testDecimal;



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (CARBONDATA-613) Data Load is not working for Decimal(38,0) datatype

2017-02-15 Thread Geetika Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geetika Gupta closed CARBONDATA-613.

Resolution: Fixed

> Data Load is not working for Decimal(38,0) datatype
> ---
>
> Key: CARBONDATA-613
> URL: https://issues.apache.org/jira/browse/CARBONDATA-613
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6 and hadoop: 2.6.5
>Reporter: Geetika Gupta
>Assignee: Manohar Vanam
>Priority: Minor
> Attachments: 2000_UniqDataWithMaxNoOfDigitsBeforeDecimal.csv
>
>
> I tried to load data into a table having a Decimal(38,0) column, but it 
> fails with a DataLoad Failure.
> Create table command:
> CREATE TABLE uniqdata2 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
> string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(38,0), DECIMAL_COLUMN2 
> decimal(38,0),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB")
> Data Load command:
> LOAD DATA INPATH 
> 'hdfs://localhost:54311/testFiles/2000_UniqDataWithMaxNoOfDigitsBeforeDecimal.csv'
>  into table uniqdata2 OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> I tried the same queries in Hive and they worked successfully.
> Here are the logs for the exception:
> ERROR 09-01 16:09:11,855 - pool-185-thread-1 Problem while writing the carbon 
> data file
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:502)
>   at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$BlockletDataHolder.get(CarbonFactDataHandlerColumnar.java:1539)
>   at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Consumer.call(CarbonFactDataHandlerColumnar.java:1629)
>   at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Consumer.call(CarbonFactDataHandlerColumnar.java:1611)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> ERROR 09-01 16:09:11,856 - [uniqdata2: Graph - 
> MDKeyGenuniqdata2][partitionID:0] 
> org.apache.carbondata.processing.store.writer.exception.CarbonDataWriterException:
>  For input string: "12345678901234567890123456789012345678"
> java.util.concurrent.ExecutionException: 
> org.apache.carbondata.processing.store.writer.exception.CarbonDataWriterException:
>  For input string: "12345678901234567890123456789012345678"
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:1117)
>   at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.finish(CarbonFactDataHandlerColumnar.java:1084)
>   at 
> org.apache.carbondata.processing.mdkeygen.MDKeyGenStep.processRow(MDKeyGenStep.java:222)
>   at org.pentaho.di.trans.step.RunThread.run(RunThread.java:50)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: 
> org.apache.carbondata.processing.store.writer.exception.CarbonDataWriterException:
>  For input string: "12345678901234567890123456789012345678"
>   at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:1603)
>   at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:1569)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   ... 1 more
> Caused by: java.lang.NumberFormatException: For input string: 
> "12345678901234567890123456789012345678"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Long.parseLong(Long.java:592)
>   at java.lang.Long.parseLong(Long.java:631)
>   at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processDataRows(CarbonFactDataHandlerColumnar.java:595

[jira] [Updated] (CARBONDATA-688) Abnormal behaviour of double datatype when used in DICTIONARY_INCLUDE and filtering null values

2017-02-08 Thread Geetika Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geetika Gupta updated CARBONDATA-688:
-
Request participants:   (was: )
 Summary: Abnormal behaviour of double datatype when used in 
DICTIONARY_INCLUDE and filtering null values  (was: Select query displays wrong 
data for double datatype having null values)

> Abnormal behaviour of double datatype when used in DICTIONARY_INCLUDE and 
> filtering null values
> ---
>
> Key: CARBONDATA-688
> URL: https://issues.apache.org/jira/browse/CARBONDATA-688
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.1.0-incubating
> Environment: Spark 2.1
>Reporter: Geetika Gupta
>Priority: Minor
> Attachments: 100_olap_C20.csv
>
>
> I tried to create a table having double as a column and loaded null values 
> into that table. When I performed the select query on the table, it displayed 
> wrong data.
> Below are the commands used:
> Create table :
> create table  Comp_VMALL_DICTIONARY_INCLUDE (imei string,deviceInformationId 
> int,MAC string,deviceColor string,device_backColor string,modelId 
> string,marketName string,AMSize string,ROMSize string,CUPAudit 
> string,CPIClocked string,series string,productionDate timestamp,bomCode 
> string,internalModels string, deliveryTime string, channelsId string, 
> channelsName string , deliveryAreaId string, deliveryCountry string, 
> deliveryProvince string, deliveryCity string,deliveryDistrict string, 
> deliveryStreet string, oxSingleNumber string, ActiveCheckTime string, 
> ActiveAreaId string, ActiveCountry string, ActiveProvince string, Activecity 
> string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, 
> Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion 
> string, Active_BacVerNumber string, Active_BacFlashVer string, 
> Active_webUIVersion string, Active_webUITypeCarrVer 
> string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, 
> Active_phonePADPartitionedVersions string, Latest_YEAR int, Latest_MONTH int, 
> Latest_DAY Decimal(30,10), Latest_HOUR string, Latest_areaId string, 
> Latest_country string, Latest_province string, Latest_city string, 
> Latest_district string, Latest_street string, Latest_releaseId string, 
> Latest_EMUIVersion string, Latest_operaSysVersion string, Latest_BacVerNumber 
> string, Latest_BacFlashVer string, Latest_webUIVersion string, 
> Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, 
> Latest_operatorsVersion string, Latest_phonePADPartitionedVersions string, 
> Latest_operatorId string, gamePointDescription string,gamePointId 
> double,contractNumber BigInt)  STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_INCLUDE'='imei,deviceInformationId,productionDate,gamePointId,Latest_DAY,contractNumber');
> Load command:
> LOAD DATA INPATH  'hdfs://localhost:54311/BabuStore/DATA/100_olap_C20.csv' 
> INTO table Comp_VMALL_DICTIONARY_INCLUDE options ('DELIMITER'=',', 
> 'QUOTECHAR'='"', 
> 'BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='imei,deviceInformationId,MAC,deviceColor,device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series,productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId,deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet,oxSingleNumber,contractNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity,ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion,Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion,Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion,Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR,Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street,Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVersion,Latest_BacVerNumber,Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer,Latest_webTypeDataVerNumber,Latest_operatorsVersion,Latest_phonePADPartitionedVersions,Latest_operatorId,gamePointId,gamePointDescription');
> Select query:
> select gamePointId  from Comp_VMALL_DICTIONARY_INCLUDE where gamePointId IS 
> NOT NULL order by gamePointId;
> select gamePointId from Comp_VMALL_DICTIONARY_INCLUDE where gamePointId is 
> NULL;
> The first select (filtering IS NOT NULL) displays null values as well, and 
> the second (filtering IS NULL) displays no rows.
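
A workaround sketch (untested; it assumes the misbehaviour is tied to 
dictionary-encoding the double column): create the table without gamePointId in 
DICTIONARY_INCLUDE and rerun the null filters. A minimal, hypothetical schema:

create table double_null_check (id int, gamePointId double) stored by 
'carbondata' TBLPROPERTIES('DICTIONARY_INCLUDE'='id');
-- after loading a CSV that contains empty gamePointId values:
select gamePointId from double_null_check where gamePointId IS NULL;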



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] (CARBONDATA-688) Select query displays wrong data for double datatype having null values

2017-01-30 Thread Geetika Gupta (JIRA)
Geetika Gupta created CARBONDATA-688:


 Summary: Select query displays wrong data for double datatype having null values
 Key: CARBONDATA-688
 URL: https://issues.apache.org/jira/browse/CARBONDATA-688
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.1.0-incubating
 Environment: Spark 2.1
Reporter: Geetika Gupta
Assignee: Unassigned
Priority: Minor
 Attachments: 100_olap_C20.csv
 Created: 30/Jan/17 12:21

I tried to create a table having double as a column and loaded null values into 
that table. When I performed the select query on the table, it displayed wrong 
data. Below are the commands used:
Create table : create table Comp_VMALL_DICTIONARY_INCLUDE (imei string,deviceInformationId

[jira] [Closed] (CARBONDATA-658) Compression is not working for BigInt and Int datatype

2017-01-26 Thread Geetika Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geetika Gupta closed CARBONDATA-658.

Resolution: Fixed

> Compression is not working for BigInt and Int datatype
> --
>
> Key: CARBONDATA-658
> URL: https://issues.apache.org/jira/browse/CARBONDATA-658
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6, 2.0
>Reporter: Geetika Gupta
> Attachments: 10_LargeBigInt.csv, 10_LargeInt.csv, 
> 10_SmallBigInt.csv, 10_SmallInt.csv, sample1.csv
>
>
> I tried to load data into a table having bigInt as a column. First I loaded 
> small bigint values into the table and noted down the carbondata file size; 
> then I loaded max bigint values into the table and again noted the carbondata 
> file size.
> For large bigint values the carbondata file size was 684.25 KB and for small 
> bigint values it was 684.26 KB, so I could not tell whether compression was 
> being performed.
> I tried the same scenario with the int datatype as well. For large int values 
> the carbondata file size was 684.24 KB and for small int values it was 684.26 
> KB.
> Below are the queries:
> For BigInt table:
> Create table test(a BigInt, b String) stored by 'carbondata';
> LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_LargeBigInt.csv' 
> into table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');
> LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_SmallBigInt.csv' 
> into table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');
> For Int table:
> Create table test(a Int, b String) stored by 'carbondata';
> LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_LargeInt.csv' into 
> table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');
> LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_SmallInt.csv' into 
> table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');
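
To make the size comparison reproducible from SQL, the segments can be listed 
per load (a sketch; SHOW SEGMENTS is the documented syntax, but per-segment data 
size is only reported by it in later CarbonData releases, so on 1.0.0 the 
carbondata files still have to be measured on HDFS directly):

SHOW SEGMENTS FOR TABLE test LIMIT 2;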



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (CARBONDATA-625) Abnormal behaviour of Int datatype

2017-01-18 Thread Geetika Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geetika Gupta closed CARBONDATA-625.

Resolution: Fixed

> Abnormal behaviour of Int datatype
> --
>
> Key: CARBONDATA-625
> URL: https://issues.apache.org/jira/browse/CARBONDATA-625
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.0.0-incubating
> Environment: Spark: 1.6  and hadoop: 2.6.5 
>Reporter: Geetika Gupta
>Assignee: Manish Gupta
>Priority: Minor
> Attachments: Screenshot from 2017-01-11 18-36-24.png, 
> testMaxValueForBigInt.csv
>
>
> I was trying to create a table having int as a column and loaded data into 
> the table. Data loading completed successfully, but when I viewed the table's 
> data, it contained wrong values. I was trying to load BigInt data into an int 
> column, and every row of the int column was loaded with the first value of 
> the CSV. Below are the details for the queries:
> create table xyz(a int, b string) stored by 'carbondata';
> Data load query:
> LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/testMaxValueForBigInt.csv' 
> into table xyz OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='a,b');
> select query:
> select * from xyz;
> PFA the screenshot of the output and the CSV file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-658) Compression is not working for BigInt and Int datatype

2017-01-18 Thread Geetika Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geetika Gupta updated CARBONDATA-658:
-
Description: 
I tried to load data into a table having bigInt as a column. First I loaded 
small bigint values into the table and noted down the carbondata file size; 
then I loaded max bigint values into the table and again noted the carbondata 
file size.

For large bigint values the carbondata file size was 684.25 KB and for small 
bigint values it was 684.26 KB, so I could not tell whether compression was 
being performed.

I tried the same scenario with the int datatype as well. For large int values 
the carbondata file size was 684.24 KB and for small int values it was 684.26 KB.

Below are the queries:
For BigInt table:

Create table test(a BigInt, b String) stored by 'carbondata';

LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_LargeBigInt.csv' into 
table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');

LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_SmallBigInt.csv' into 
table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');

For Int table:

Create table test(a Int, b String) stored by 'carbondata';

LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_LargeInt.csv' into 
table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');

LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_SmallInt.csv' into 
table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');


  was:I tried to load data into a table having bigInt as a column. Firstly I 
loaded small bigint values to the table and noted down the carbondata file size 
then I loaded max bigint values to the table and again noted the carbondata 
file size.

 Attachment: 10_SmallInt.csv
 10_LargeInt.csv
 10_SmallBigInt.csv
 10_LargeBigInt.csv
Environment: spark 1.6, 2.0  (was: spark 1.6)
Summary: Compression is not working for BigInt and Int datatype  (was: 
Compression is not working for BigInt and Int)

> Compression is not working for BigInt and Int datatype
> --
>
> Key: CARBONDATA-658
> URL: https://issues.apache.org/jira/browse/CARBONDATA-658
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6, 2.0
>Reporter: Geetika Gupta
> Attachments: 10_LargeBigInt.csv, 10_LargeInt.csv, 
> 10_SmallBigInt.csv, 10_SmallInt.csv
>
>
> I tried to load data into a table having bigInt as a column. First I loaded 
> small bigint values into the table and noted down the carbondata file size; 
> then I loaded max bigint values into the table and again noted the carbondata 
> file size.
> For large bigint values the carbondata file size was 684.25 KB and for small 
> bigint values it was 684.26 KB, so I could not tell whether compression was 
> being performed.
> I tried the same scenario with the int datatype as well. For large int values 
> the carbondata file size was 684.24 KB and for small int values it was 684.26 KB.
> Below are the queries:
> For BigInt table:
> Create table test(a BigInt, b String) stored by 'carbondata';
> LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_LargeBigInt.csv' 
> into table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');
> LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_SmallBigInt.csv' 
> into table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');
> For Int table:
> Create table test(a Int, b String) stored by 'carbondata';
> LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_LargeInt.csv' into 
> table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');
> LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/10_SmallInt.csv' into 
> table test OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='b,a');



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-658) Compression is not working for BigInt and Int

2017-01-18 Thread Geetika Gupta (JIRA)
Geetika Gupta created CARBONDATA-658:


 Summary: Compression is not working for BigInt and Int
 Key: CARBONDATA-658
 URL: https://issues.apache.org/jira/browse/CARBONDATA-658
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
Affects Versions: 1.0.0-incubating
 Environment: spark 1.6
Reporter: Geetika Gupta


I tried to load data into a table having bigInt as a column. First I loaded 
small bigint values into the table and noted down the carbondata file size; 
then I loaded max bigint values into the table and again noted the carbondata 
file size.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-653) Select query displays wrong data for Decimal(38,38)

2017-01-17 Thread Geetika Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geetika Gupta updated CARBONDATA-653:
-
Priority: Minor  (was: Major)

> Select query displays wrong data for Decimal(38,38)
> ---
>
> Key: CARBONDATA-653
> URL: https://issues.apache.org/jira/browse/CARBONDATA-653
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: spark 1.6
>Reporter: Geetika Gupta
>Priority: Minor
> Attachments: Screenshot from 2017-01-17 18-48-43.png, 
> testMaxDigitsAfterDecimal.csv
>
>
> I tried to load data into a table having decimal(38,38) as a column. The 
> data load was successful, but when I displayed the data, the table contained 
> some wrong values.
> Below are the queries:
> create table testDecimal(a decimal(38,38), b String) stored by 'carbondata';
> LOAD DATA INPATH 
> 'hdfs://localhost:54311/testFiles/testMaxDigitsAfterDecimal.csv' into table 
> testDecimal OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='a,b');
> select * from testDecimal;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-653) Select query displays wrong data for Decimal(38,38)

2017-01-17 Thread Geetika Gupta (JIRA)
Geetika Gupta created CARBONDATA-653:


 Summary: Select query displays wrong data for Decimal(38,38)
 Key: CARBONDATA-653
 URL: https://issues.apache.org/jira/browse/CARBONDATA-653
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.0.0-incubating
 Environment: spark 1.6
Reporter: Geetika Gupta
 Attachments: Screenshot from 2017-01-17 18-48-43.png, 
testMaxDigitsAfterDecimal.csv

I tried to load data into a table having decimal(38,38) as a column. The 
data load was successful, but when I displayed the data, the table contained 
some wrong values.

Below are the queries:

create table testDecimal(a decimal(38,38), b String) stored by 'carbondata';

LOAD DATA INPATH 
'hdfs://localhost:54311/testFiles/testMaxDigitsAfterDecimal.csv' into table 
testDecimal OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='a,b');

select * from testDecimal;
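
Worth noting: decimal(38,38) has scale equal to precision, so it has zero 
integer digits and can only represent values strictly between -1 and 1. A quick 
sanity check (a sketch; Spark's default non-strict cast returns NULL for values 
that do not fit):

select cast('0.99999999999999999999999999999999999999' as decimal(38,38)); -- fits
select cast('1.5' as decimal(38,38)); -- NULL: the integer part does not fit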



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CARBONDATA-625) Abnormal behaviour of Int datatype

2017-01-11 Thread Geetika Gupta (JIRA)
Geetika Gupta created CARBONDATA-625:


 Summary: Abnormal behaviour of Int datatype
 Key: CARBONDATA-625
 URL: https://issues.apache.org/jira/browse/CARBONDATA-625
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
Affects Versions: 1.0.0-incubating
 Environment: Spark: 1.6  and hadoop: 2.6.5 
Reporter: Geetika Gupta
Priority: Minor
 Attachments: Screenshot from 2017-01-11 18-36-24.png, 
testMaxValueForBigInt.csv

I was trying to create a table having int as a column and loaded data into the 
table. Data loading completed successfully, but when I viewed the table's data, 
it contained wrong values. I was trying to load BigInt data into an int column, 
and every row of the int column was loaded with the first value of the CSV. 
Below are the details for the queries:

create table xyz(a int, b string) stored by 'carbondata';

Data load query:
LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/testMaxValueForBigInt.csv' 
into table xyz OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='a,b');

select query:
select * from xyz;

PFA the screenshot of the output and the CSV file.
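
Since int tops out at 2,147,483,647, bigint-range values cannot be represented 
in the int column at all; independent of the wrong-data symptom, the data only 
fits if the column is declared wide enough. A sketch with a hypothetical table 
name:

create table xyz_big(a bigint, b string) stored by 'carbondata';
LOAD DATA INPATH 'hdfs://localhost:54311/testFiles/testMaxValueForBigInt.csv' 
into table xyz_big OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='a,b');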








--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CARBONDATA-613) Data Load is not working for Decimal(38,0) datatype

2017-01-09 Thread Geetika Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geetika Gupta updated CARBONDATA-613:
-
Description: 
I tried to load data into a table having a Decimal(38,0) column, but it fails 
with a DataLoad Failure.

Create table command:
CREATE TABLE uniqdata2 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
bigint,DECIMAL_COLUMN1 decimal(38,0), DECIMAL_COLUMN2 
decimal(38,0),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) 
STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= "256 
MB")

Data Load command:
LOAD DATA INPATH 
'hdfs://localhost:54311/testFiles/2000_UniqDataWithMaxNoOfDigitsBeforeDecimal.csv'
 into table uniqdata2 OPTIONS('DELIMITER'=',' , 
'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

I tried the same queries in Hive and they worked successfully.
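
The trace below points at the cause: the writer parses the value with 
Long.parseLong, and Long.MAX_VALUE is 9223372036854775807 (19 digits), while 
the failing input has 38 digits. The value itself is a valid decimal(38,0), 
which a plain cast confirms (a sketch; this isolates the failure to the Carbon 
writer rather than the SQL type):

select cast('12345678901234567890123456789012345678' as decimal(38,0));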

Here are the logs for the exception:


ERROR 09-01 16:09:11,855 - pool-185-thread-1 Problem while writing the carbon 
data file
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$BlockletDataHolder.get(CarbonFactDataHandlerColumnar.java:1539)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Consumer.call(CarbonFactDataHandlerColumnar.java:1629)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Consumer.call(CarbonFactDataHandlerColumnar.java:1611)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
ERROR 09-01 16:09:11,856 - [uniqdata2: Graph - 
MDKeyGenuniqdata2][partitionID:0] 
org.apache.carbondata.processing.store.writer.exception.CarbonDataWriterException:
 For input string: "12345678901234567890123456789012345678"
java.util.concurrent.ExecutionException: 
org.apache.carbondata.processing.store.writer.exception.CarbonDataWriterException:
 For input string: "12345678901234567890123456789012345678"
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:1117)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.finish(CarbonFactDataHandlerColumnar.java:1084)
at 
org.apache.carbondata.processing.mdkeygen.MDKeyGenStep.processRow(MDKeyGenStep.java:222)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:50)
at java.lang.Thread.run(Thread.java:745)
Caused by: 
org.apache.carbondata.processing.store.writer.exception.CarbonDataWriterException:
 For input string: "12345678901234567890123456789012345678"
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:1603)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:1569)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: java.lang.NumberFormatException: For input string: 
"12345678901234567890123456789012345678"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:592)
at java.lang.Long.parseLong(Long.java:631)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processDataRows(CarbonFactDataHandlerColumnar.java:595)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.access$700(CarbonFactDataHandlerColumnar.java:85)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:1592)
... 5 more
ERROR 09-01 16:09:11,856 - [uniqdata2: Graph - 
MDKeyGenuniqdata2][partitionID:0] Failed for table: uniqdata2 in  finishing 
data handler
org.apache.carbondata.processing.store.writer.exception.CarbonDataWriterException:
 
org.apache.carbondata.processing.store.writer.exception.CarbonDataWriterException:
 For input string: "12345678901234567890123456789012345678"
at 
org.apac

[jira] [Created] (CARBONDATA-613) Data Load is not working for Decimal(38,0) datatype

2017-01-09 Thread Geetika Gupta (JIRA)
Geetika Gupta created CARBONDATA-613:


 Summary: Data Load is not working for Decimal(38,0) datatype
 Key: CARBONDATA-613
 URL: https://issues.apache.org/jira/browse/CARBONDATA-613
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
Affects Versions: 1.0.0-incubating
 Environment: spark 1.6 and hadoop: 2.6.5
Reporter: Geetika Gupta
Priority: Minor
 Attachments: 2000_UniqDataWithMaxNoOfDigitsBeforeDecimal.csv

I tried to load data into a table having a Decimal(38,0) column, but it fails 
with a DataLoad Failure.

Create table command:
CREATE TABLE uniqdata2 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
bigint,DECIMAL_COLUMN1 decimal(38,0), DECIMAL_COLUMN2 
decimal(38,0),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) 
STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= "256 
MB")

Data Load command:
LOAD DATA INPATH 
'hdfs://localhost:54311/testFiles/2000_UniqDataWithMaxNoOfDigitsBeforeDecimal.csv'
 into table uniqdata2 OPTIONS('DELIMITER'=',' , 
'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');



Here are the logs for the exception:


ERROR 09-01 16:09:11,855 - pool-185-thread-1 Problem while writing the carbon 
data file
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$BlockletDataHolder.get(CarbonFactDataHandlerColumnar.java:1539)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Consumer.call(CarbonFactDataHandlerColumnar.java:1629)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Consumer.call(CarbonFactDataHandlerColumnar.java:1611)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
ERROR 09-01 16:09:11,856 - [uniqdata2: Graph - 
MDKeyGenuniqdata2][partitionID:0] 
org.apache.carbondata.processing.store.writer.exception.CarbonDataWriterException:
 For input string: "12345678901234567890123456789012345678"
java.util.concurrent.ExecutionException: 
org.apache.carbondata.processing.store.writer.exception.CarbonDataWriterException:
 For input string: "12345678901234567890123456789012345678"
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:1117)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.finish(CarbonFactDataHandlerColumnar.java:1084)
at 
org.apache.carbondata.processing.mdkeygen.MDKeyGenStep.processRow(MDKeyGenStep.java:222)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:50)
at java.lang.Thread.run(Thread.java:745)
Caused by: 
org.apache.carbondata.processing.store.writer.exception.CarbonDataWriterException:
 For input string: "12345678901234567890123456789012345678"
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:1603)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:1569)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: java.lang.NumberFormatException: For input string: 
"12345678901234567890123456789012345678"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:592)
at java.lang.Long.parseLong(Long.java:631)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processDataRows(CarbonFactDataHandlerColumnar.java:595)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.access$700(CarbonFactDataHandlerColumnar.java:85)
at 
org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:1592)
... 5 more
ERROR 09-01 16:09:11,856 - [uniqdata2: Graph - 
MDKeyGenuniqdata2][partitionID:0] Failed fo