[GitHub] carbondata issue #1310: [CARBONDATA-1442] Refactored Partition-Guide.md

2017-09-03 Thread sgururajshetty
Github user sgururajshetty commented on the issue:

https://github.com/apache/carbondata/pull/1310
  
LGTM
@chenliang613  kindly review


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1315: [CARBONDATA-1431] Fixed the parser for including tim...

2017-09-03 Thread PallaviSingh1992
Github user PallaviSingh1992 commented on the issue:

https://github.com/apache/carbondata/pull/1315
  
retest this please





[GitHub] carbondata issue #1315: [CARBONDATA-1431] Fixed the parser for including tim...

2017-09-03 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1315
  
SDV Build Fail, Please check CI
http://144.76.159.231:8080/job/ApacheSDVTests/495/





[GitHub] carbondata issue #1284: [CARBONDATA-1330] Not able to drop temporary table b...

2017-09-03 Thread PannerselvamVelmyl
Github user PannerselvamVelmyl commented on the issue:

https://github.com/apache/carbondata/pull/1284
  
retest this please




[GitHub] carbondata issue #1310: [CARBONDATA-1442] Refactored Partition-Guide.md

2017-09-03 Thread Ayushi93
Github user Ayushi93 commented on the issue:

https://github.com/apache/carbondata/pull/1310
  
LGTM
@chenliang613 kindly review 




[jira] [Resolved] (CARBONDATA-1317) Multiple dictionary files being created in single_pass

2017-09-03 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G resolved CARBONDATA-1317.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> Multiple dictionary files being created in single_pass
> --
>
> Key: CARBONDATA-1317
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1317
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Kunal Kapoor
>Assignee: Kunal Kapoor
> Fix For: 1.2.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
> 1. Create table 
> 2. Load 2 times
> 3. Drop table 
> 4. Create table with same name
> 5. Load 2 times
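
A minimal sketch of those reproduction steps in CarbonData DDL, assuming a dictionary-encoded column and single-pass loading (table name, columns, and CSV path below are illustrative, not taken from the report):

```sql
-- 1. Create a table with a dictionary-encoded column (names are hypothetical).
CREATE TABLE t_repro (id INT, name STRING)
STORED BY 'org.apache.carbondata.format'
TBLPROPERTIES ('DICTIONARY_INCLUDE'='name');

-- 2. Load twice with single_pass enabled.
LOAD DATA INPATH 'hdfs://localhost:9000/data/sample.csv' INTO TABLE t_repro
OPTIONS ('SINGLE_PASS'='TRUE');
LOAD DATA INPATH 'hdfs://localhost:9000/data/sample.csv' INTO TABLE t_repro
OPTIONS ('SINGLE_PASS'='TRUE');

-- 3. Drop the table.
DROP TABLE t_repro;

-- 4./5. Recreate the same table name and load twice again; per the report,
-- this second round could leave multiple dictionary files for the column.
CREATE TABLE t_repro (id INT, name STRING)
STORED BY 'org.apache.carbondata.format'
TBLPROPERTIES ('DICTIONARY_INCLUDE'='name');

LOAD DATA INPATH 'hdfs://localhost:9000/data/sample.csv' INTO TABLE t_repro
OPTIONS ('SINGLE_PASS'='TRUE');
LOAD DATA INPATH 'hdfs://localhost:9000/data/sample.csv' INTO TABLE t_repro
OPTIONS ('SINGLE_PASS'='TRUE');
```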



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1097: [CARBONDATA-1317] Multiple dictionary files being cr...

2017-09-03 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1097
  
LGTM




[GitHub] carbondata issue #1310: [CARBONDATA-1442] Refactored Partition-Guide.md

2017-09-03 Thread PallaviSingh1992
Github user PallaviSingh1992 commented on the issue:

https://github.com/apache/carbondata/pull/1310
  
@sgururajshetty @chenliang613 please review this




[jira] [Assigned] (CARBONDATA-1431) Dictionary_Include working incorrectly for date and timestamp data type.

2017-09-03 Thread Pallavi Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pallavi Singh reassigned CARBONDATA-1431:
-

Assignee: Pallavi Singh

> Dictionary_Include working incorrectly for date and timestamp data type.
> 
>
> Key: CARBONDATA-1431
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1431
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql, test
>Affects Versions: 1.2.0
>Reporter: Sangeeta Gulia
>Assignee: Pallavi Singh
>Priority: Minor
> Fix For: 1.2.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we create a table with date and timestamp data types with
> DICTIONARY_INCLUDE, for example:
> CREATE TABLE uniqdata_INCLUDEDICTIONARY2 (CUST_ID int,CUST_NAME 
> String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_INCLUDE'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2')
> It should either create the dictionary for the date and timestamp fields, or
> it should throw an error stating that the "DICTIONARY_INCLUDE" feature is not
> supported for date and timestamp.
> In the current master branch, however, the query executes successfully
> without any error, and no dictionary files are created for the date and
> timestamp fields.





[GitHub] carbondata pull request #1315: [CARBONDATA-1431] Fixed the parser for includ...

2017-09-03 Thread PallaviSingh1992
GitHub user PallaviSingh1992 opened a pull request:

https://github.com/apache/carbondata/pull/1315

[CARBONDATA-1431] Fixed the parser for including time stamp and date field 
in dictionary include



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/PallaviSingh1992/carbondata CARBONDATA-1431

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1315.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1315


commit e9537d7bd7e76211066f4f9455fe23e8be5d73ee
Author: PallaviSingh1992 
Date:   2017-09-04T04:50:43Z

fixed the parser for including time and date field in dictionary include






[GitHub] carbondata issue #1265: [CARBONDATA-1128] Add encoding for non-dictionary di...

2017-09-03 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1265
  
SDV Build Fail, Please check CI
http://144.76.159.231:8080/job/ApacheSDVTests/494/





[GitHub] carbondata issue #1265: [CARBONDATA-1128] Add encoding for non-dictionary di...

2017-09-03 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1265
  
SDV Build Fail, Please check CI
http://144.76.159.231:8080/job/ApacheSDVTests/493/





[jira] [Updated] (CARBONDATA-1445) if 'carbon.update.persist.enable'='false', it will fail to update data

2017-09-03 Thread Zhichao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhichao  Zhang updated CARBONDATA-1445:
---
Description: 
When updating data with 'carbon.update.persist.enable'='false', the update fails.
Debugging shows that in the method LoadTable.processData, 'dataFrameWithTupleId'
calls the UDF 'getTupleId()', which is registered in CarbonEnv.init() as
'sparkSession.udf.register("getTupleId", () => "")'. The UDF therefore returns a
blank string to 'CarbonUpdateUtil.getRequiredFieldFromTID', and an
ArrayIndexOutOfBoundsException occurs.
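
The failure mode can be illustrated in isolation: a tuple ID is expected to be a '/'-separated path, and a helper like getRequiredFieldFromTID presumably splits it and indexes into the parts. When the registered UDF returns an empty string, the split yields a single empty element and the index lookup throws. The tuple-ID format, delimiter, and index below are illustrative stand-ins, not CarbonData's actual constants:

```java
class TupleIdSketch {
    // Illustrative stand-in for CarbonUpdateUtil.getRequiredFieldFromTID:
    // pick one component out of a '/'-separated tuple ID.
    static String getField(String tupleId, int position) {
        return tupleId.split("/")[position];
    }

    public static void main(String[] args) {
        // A well-formed tuple ID works fine.
        System.out.println(getField("0/0/0-0_batchno0-0/2", 2));  // prints "0-0_batchno0-0"

        // The UDF registered as () => "" hands back an empty string, so
        // split("/") produces a one-element array and any index > 0 throws.
        try {
            getField("", 2);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("ArrayIndexOutOfBoundsException, as seen in the bug");
        }
    }
}
```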

*the plans (logical and physical) for dataFrameWithTupleId :*
== Parsed Logical Plan ==
'Project [unresolvedalias('stringField3, None), unresolvedalias('intField, 
None), unresolvedalias('longField, None), unresolvedalias('int2Field, None), 
unresolvedalias('stringfield1-updatedColumn, None), 
unresolvedalias('stringfield2-updatedColumn, None), UDF('tupleId) AS segId#286]
+- Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
UDF:getTupleId() AS tupleId#262, concat(stringField1#111, _test) AS 
stringfield1-updatedColumn#263, concat(stringField2#112, _test) AS 
stringfield2-updatedColumn#264]
   +- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
  +- 
Relation[stringField1#111,stringField2#112,stringField3#113,intField#114,longField#115L,int2Field#116]
 CarbonDatasourceHadoopRelation [ Database name :default, Table name 
:study_carbondata, Schema 
:Some(StructType(StructField(stringField1,StringType,true), 
StructField(stringField2,StringType,true), 
StructField(stringField3,StringType,true), 
StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
StructField(int2Field,IntegerType,true))) ]

== Analyzed Logical Plan ==
stringField3: string, intField: int, longField: bigint, int2Field: int, 
stringfield1-updatedColumn: string, stringfield2-updatedColumn: string, segId: 
string
Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
stringfield1-updatedColumn#263, stringfield2-updatedColumn#264, 
UDF(tupleId#262) AS segId#286]
+- Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
UDF:getTupleId() AS tupleId#262, concat(stringField1#111, _test) AS 
stringfield1-updatedColumn#263, concat(stringField2#112, _test) AS 
stringfield2-updatedColumn#264]
   +- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
  +- 
Relation[stringField1#111,stringField2#112,stringField3#113,intField#114,longField#115L,int2Field#116]
 CarbonDatasourceHadoopRelation [ Database name :default, Table name 
:study_carbondata, Schema 
:Some(StructType(StructField(stringField1,StringType,true), 
StructField(stringField2,StringType,true), 
StructField(stringField3,StringType,true), 
StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
StructField(int2Field,IntegerType,true))) ]

== Optimized Logical Plan ==
CarbonDictionaryCatalystDecoder [CarbonDecoderRelation(Map(int2Field#116 -> 
int2Field#116, longField#115L -> longField#115L, stringField2#112 -> 
stringField2#112, stringField1#111 -> stringField1#111, stringField3#113 -> 
stringField3#113, intField#114 -> intField#114),CarbonDatasourceHadoopRelation 
[ Database name :default, Table name :study_carbondata, Schema 
:Some(StructType(StructField(stringField1,StringType,true), 
StructField(stringField2,StringType,true), 
StructField(stringField3,StringType,true), 
StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
StructField(int2Field,IntegerType,true))) ])], 
ExcludeProfile(ArrayBuffer(stringField2#112, stringField1#111)), 
CarbonAliasDecoderRelation(), true
+- Project [stringField3#113, intField#114, longField#115, int2Field#116, 
concat(stringField1#111, _test) AS stringfield1-updatedColumn#263, 
concat(stringField2#112, _test) AS stringfield2-updatedColumn#264, 
UDF(UDF:getTupleId()) AS segId#286]
   +- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
  +- 
Relation[stringField1#111,stringField2#112,stringField3#113,intField#114,longField#115L,int2Field#116]
 CarbonDatasourceHadoopRelation [ Database name :default, Table name 
:study_carbondata, Schema 
:Some(StructType(StructField(stringField1,StringType,true), 
StructField(stringField2,StringType,true), 
StructField(stringField3,StringType,true), 
StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
StructField(int2Field,IntegerType,true))) ]

== Physical Plan ==
*CarbonDictionaryDecoder [CarbonDecoderRelation(Map(int2Field#116 -> 
int2Field#116, longField#115L -> longField#115L, stringField2#112 -> 
stringField2#112, stringField1#111 -> stringField1#111, stringField3#113 -> 
stringField3#113, intField#114 -> intField#114),CarbonDatasourceHadoopRelation 
[ Database name :default, Table name :study_carbondata, Schema 
:Some(StructType(StructField(stringField1,StringType,true), 

[jira] [Created] (CARBONDATA-1445) if 'carbon.update.persist.enable'='false', it will fail to update data

2017-09-03 Thread Zhichao Zhang (JIRA)
Zhichao  Zhang created CARBONDATA-1445:
--

 Summary: if 'carbon.update.persist.enable'='false', it will fail 
to update data 
 Key: CARBONDATA-1445
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1445
 Project: CarbonData
  Issue Type: Bug
  Components: data-load, spark-integration, sql
Affects Versions: 1.2.0
 Environment: CarbonData master branch, Spark 2.1.1
Reporter: Zhichao  Zhang
Priority: Minor


When updating data with 'carbon.update.persist.enable'='false', the update fails.
Debugging shows that in the method LoadTable.processData, 'dataFrameWithTupleId'
calls the UDF 'getTupleId()', which is registered in CarbonEnv.init() as
'sparkSession.udf.register("getTupleId", () => "")'. The UDF therefore returns a
blank string to 'CarbonUpdateUtil.getRequiredFieldFromTID', and an
ArrayIndexOutOfBoundsException occurs.

the plans (logical and physical) for dataFrameWithTupleId :
== Parsed Logical Plan ==
'Project [unresolvedalias('stringField3, None), unresolvedalias('intField, 
None), unresolvedalias('longField, None), unresolvedalias('int2Field, None), 
unresolvedalias('stringfield1-updatedColumn, None), 
unresolvedalias('stringfield2-updatedColumn, None), UDF('tupleId) AS segId#286]
+- Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
UDF:getTupleId() AS tupleId#262, concat(stringField1#111, _test) AS 
stringfield1-updatedColumn#263, concat(stringField2#112, _test) AS 
stringfield2-updatedColumn#264]
   +- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
  +- 
Relation[stringField1#111,stringField2#112,stringField3#113,intField#114,longField#115L,int2Field#116]
 CarbonDatasourceHadoopRelation [ Database name :default, Table name 
:study_carbondata, Schema 
:Some(StructType(StructField(stringField1,StringType,true), 
StructField(stringField2,StringType,true), 
StructField(stringField3,StringType,true), 
StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
StructField(int2Field,IntegerType,true))) ]

== Analyzed Logical Plan ==
stringField3: string, intField: int, longField: bigint, int2Field: int, 
stringfield1-updatedColumn: string, stringfield2-updatedColumn: string, segId: 
string
Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
stringfield1-updatedColumn#263, stringfield2-updatedColumn#264, 
UDF(tupleId#262) AS segId#286]
+- Project [stringField3#113, intField#114, longField#115L, int2Field#116, 
UDF:getTupleId() AS tupleId#262, concat(stringField1#111, _test) AS 
stringfield1-updatedColumn#263, concat(stringField2#112, _test) AS 
stringfield2-updatedColumn#264]
   +- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
  +- 
Relation[stringField1#111,stringField2#112,stringField3#113,intField#114,longField#115L,int2Field#116]
 CarbonDatasourceHadoopRelation [ Database name :default, Table name 
:study_carbondata, Schema 
:Some(StructType(StructField(stringField1,StringType,true), 
StructField(stringField2,StringType,true), 
StructField(stringField3,StringType,true), 
StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
StructField(int2Field,IntegerType,true))) ]

== Optimized Logical Plan ==
CarbonDictionaryCatalystDecoder [CarbonDecoderRelation(Map(int2Field#116 -> 
int2Field#116, longField#115L -> longField#115L, stringField2#112 -> 
stringField2#112, stringField1#111 -> stringField1#111, stringField3#113 -> 
stringField3#113, intField#114 -> intField#114),CarbonDatasourceHadoopRelation 
[ Database name :default, Table name :study_carbondata, Schema 
:Some(StructType(StructField(stringField1,StringType,true), 
StructField(stringField2,StringType,true), 
StructField(stringField3,StringType,true), 
StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
StructField(int2Field,IntegerType,true))) ])], 
ExcludeProfile(ArrayBuffer(stringField2#112, stringField1#111)), 
CarbonAliasDecoderRelation(), true
+- Project [stringField3#113, intField#114, longField#115, int2Field#116, 
concat(stringField1#111, _test) AS stringfield1-updatedColumn#263, 
concat(stringField2#112, _test) AS stringfield2-updatedColumn#264, 
UDF(UDF:getTupleId()) AS segId#286]
   +- Filter (isnotnull(stringField3#113) && (stringField3#113 = 1))
  +- 
Relation[stringField1#111,stringField2#112,stringField3#113,intField#114,longField#115L,int2Field#116]
 CarbonDatasourceHadoopRelation [ Database name :default, Table name 
:study_carbondata, Schema 
:Some(StructType(StructField(stringField1,StringType,true), 
StructField(stringField2,StringType,true), 
StructField(stringField3,StringType,true), 
StructField(intField,IntegerType,true), StructField(longField,LongType,true), 
StructField(int2Field,IntegerType,true))) ]

== Physical Plan ==
*CarbonDictionaryDecoder [CarbonDecoderRelation(Map(int2Field#116 -> 
int2Field#116, longField#115L -> 

[GitHub] carbondata issue #985: [CARBONDATA-1090] added integration test cases for al...

2017-09-03 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/985
  
SDV Build Fail, Please check CI
http://144.76.159.231:8080/job/ApacheSDVTests/492/





[GitHub] carbondata issue #985: [CARBONDATA-1090] added integration test cases for al...

2017-09-03 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/985
  
SDV Build Fail, Please check CI
http://144.76.159.231:8080/job/ApacheSDVTests/491/





[GitHub] carbondata issue #1097: [CARBONDATA-1317] Multiple dictionary files being cr...

2017-09-03 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1097
  
SDV Build Success, Please check CI
http://144.76.159.231:8080/job/ApacheSDVTests/490/





[GitHub] carbondata issue #1166: [CARBONDATA-1305] Add limit for external dictionary ...

2017-09-03 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1166
  
SDV Build Fail, Please check CI
http://144.76.159.231:8080/job/ApacheSDVTests/489/





[GitHub] carbondata issue #1097: [CARBONDATA-1317] Multiple dictionary files being cr...

2017-09-03 Thread kunal642
Github user kunal642 commented on the issue:

https://github.com/apache/carbondata/pull/1097
  
retest this please




[GitHub] carbondata issue #1166: [CARBONDATA-1305] Add limit for external dictionary ...

2017-09-03 Thread kunal642
Github user kunal642 commented on the issue:

https://github.com/apache/carbondata/pull/1166
  
retest this please

