[jira] [Commented] (CARBONDATA-995) Incorrect result displays while using variance aggregate function in presto integration

2017-11-23 Thread Vandana Yadav (JIRA)

[ https://issues.apache.org/jira/browse/CARBONDATA-995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16264256#comment-16264256 ]

Vandana Yadav commented on CARBONDATA-995:
--

Running the same query on Hive yields a different result:
1) Create table:
hive> CREATE TABLE uniqdata_h (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

2) Load data:
hive> load data local inpath 
'/home/knoldus/Desktop/csv/TestData/Data/uniqdata/2000_UniqData.csv' into table 
uniqdata_h;

3) Execute query:
hive> select variance(DECIMAL_COLUMN1) as a   from (select DECIMAL_COLUMN1 from 
UNIQDATA_h order by DECIMAL_COLUMN1) t;
Query ID = knoldus_20171123174059_cdc24e03-f8b1-41d5-b496-3fa3acbc4608
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Job running in-process (local Hadoop)
2017-11-23 17:41:00,945 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_local1774409020_0004
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 3009784 HDFS Write: 752446 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
333665.7302720188
Time taken: 1.512 seconds, Fetched: 1 row(s)
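
One caveat before comparing these numbers: the CarbonData load used BAD_RECORDS_ACTION='FORCE', which stores unparseable values as null, while the plain Hive load keeps whatever is in the file, so the two tables may not aggregate over the same set of non-null values. A quick sanity check helps (the query below is only a sketch and was not part of the original run):

hive> select count(*), count(DECIMAL_COLUMN1) from uniqdata_h;

variance() ignores nulls, so if count(DECIMAL_COLUMN1) differs between the Hive table and the Carbon table, the aggregates are not directly comparable in the first place.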

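The larger gap between Hive and the other two engines is most likely a difference in function semantics rather than a bug: in Hive, variance() is an alias for var_pop() (population variance), whereas in Presto and Spark SQL variance() is an alias for var_samp() (sample variance). With roughly n = 2000 values, var_samp * (n-1)/n = 333832.4983 * 1999/2000 ≈ 333665.58, in the same ballpark as Hive's 333665.73 (the remaining drift is plausibly a slightly different non-null count plus floating-point accumulation order; the ORDER BY in the subquery can change summation order and therefore the last digits of a double aggregate). To compare like with like, run the explicit variants on each engine:

hive> select var_pop(DECIMAL_COLUMN1), var_samp(DECIMAL_COLUMN1) from uniqdata_h;

Both var_pop and var_samp exist in Hive and Presto, so the same statement (without the hive> prompt) can be run in the Presto CLI against the Carbon table. If var_samp agrees across engines to within floating-point noise, the discrepancy is just the population-vs-sample naming difference.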

> Incorrect result displays while using variance aggregate function in presto 
> integration
> ---
>
> Key: CARBONDATA-995
> URL: https://issues.apache.org/jira/browse/CARBONDATA-995
> Project: CarbonData
> Issue Type: Bug
> Components: data-query, presto-integration
> Affects Versions: 1.1.0
> Environment: spark 2.1, presto 0.166
> Reporter: Vandana Yadav
> Priority: Minor
> Attachments: 2000_UniqData.csv
>
>
> Incorrect result displays while using variance aggregate function in presto 
> integration
> Steps to reproduce:
> 1. In CarbonData:
> a) Create table:
> CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
> string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> b) Load data:
> LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into table 
> uniqdata OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> 2. In Presto:
> a) Execute the query:
> select variance(DECIMAL_COLUMN1) as a   from (select DECIMAL_COLUMN1 from 
> UNIQDATA order by DECIMAL_COLUMN1) t
> Actual result:
> In CarbonData:
> +---------------------+
> |          a          |
> +---------------------+
> | 333832.4983039884   |
> +---------------------+
> 1 row selected (0.695 seconds)
>
> In Presto:
>          a
> --------------------
>  333832.3010442859
> (1 row)
> Query 20170420_082837_00062_hd7jy, FINISHED, 1 node
> Splits: 35 total, 35 done (100.00%)
> 0:00 [2.01K rows, 1.97KB] [8.09K rows/s, 7.91KB/s]
> Expected result: it should display the same result as shown in CarbonData.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-995) Incorrect result displays while using variance aggregate function in presto integration

2017-06-29 Thread chenerlu (JIRA)

[ https://issues.apache.org/jira/browse/CARBONDATA-995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16068075#comment-16068075 ]

chenerlu commented on CARBONDATA-995:
-

Hi, what is the behavior of the same operation in Hive?
