[GitHub] [carbondata] jackylk commented on a change in pull request #3540: [CARBONDATA-3639][CARBONDATA-3638] Fix global sort exception in load from CSV flow with binary non-sort columns

2019-12-29 Thread GitBox
jackylk commented on a change in pull request #3540: [CARBONDATA-3639][CARBONDATA-3638] Fix global sort exception in load from CSV flow with binary non-sort columns
URL: https://github.com/apache/carbondata/pull/3540#discussion_r361846633
 
 

 ##
 File path: integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/binary/TestBinaryDataType.scala
 ##
 @@ -112,6 +112,48 @@ class TestBinaryDataType extends QueryTest with BeforeAndAfterAll {
 }
 }
 
+test("Create table and load data with binary column with other global sort 
columns") {
+sql("DROP TABLE IF EXISTS binaryTable")
+sql(
+s"""
+   | CREATE TABLE IF NOT EXISTS binaryTable (
+   |id int,
+   |label boolean,
+   |name string,
+   |binaryField binary,
+   |autoLabel boolean)
+   | STORED BY 'carbondata'
 
 Review comment:
   use STORED AS CARBONDATA
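   For reference, a minimal sketch of how the test's DDL would look with the suggested syntax applied. The table and column definitions are taken from the hunk above; only the storage clause changes:
   ```scala
   sql(
     s"""
        | CREATE TABLE IF NOT EXISTS binaryTable (
        |    id int,
        |    label boolean,
        |    name string,
        |    binaryField binary,
        |    autoLabel boolean)
        | STORED AS carbondata
      """.stripMargin)
   ```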




[GitHub] [carbondata] jackylk commented on a change in pull request #3540: [CARBONDATA-3639][CARBONDATA-3638] Fix global sort exception in load from CSV flow with binary non-sort columns

2019-12-29 Thread GitBox
jackylk commented on a change in pull request #3540: [CARBONDATA-3639][CARBONDATA-3638] Fix global sort exception in load from CSV flow with binary non-sort columns
URL: https://github.com/apache/carbondata/pull/3540#discussion_r361846507
 
 

 ##
 File path: integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessorStepOnSpark.scala
 ##
 @@ -97,6 +97,36 @@ object DataLoadProcessorStepOnSpark {
 }
   }
 
+  def inputFuncForCsvRows(
+      rows: Iterator[StringArrayRow],
+      index: Int,
+      modelBroadcast: Broadcast[CarbonLoadModel],
+      rowCounter: Accumulator[Int]): Iterator[CarbonRow] = {
+    val model: CarbonLoadModel = modelBroadcast.value.getCopyWithTaskNo(index.toString)
+    val conf = DataLoadProcessBuilder.createConfiguration(model)
+    val rowParser = new RowParserImpl(conf.getDataFields, conf)
+    val isRawDataRequired = CarbonDataProcessorUtil.isRawDataRequired(conf)
+    TaskContext.get().addTaskFailureListener { (t: TaskContext, e: Throwable) =>
+      wrapException(e, model)
+    }
+
+    new Iterator[CarbonRow] {
+      override def hasNext: Boolean = rows.hasNext
+
+      override def next(): CarbonRow = {
+        var row : CarbonRow = null
+        val rawRow = rows.next().values.asInstanceOf[Array[Object]]
+        if(isRawDataRequired) {
 
 Review comment:
   ```suggestion
   val row = if (isRawDataRequired) {
   ```
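   Applied to the surrounding `next()` method, the suggestion collapses the mutable `var row` and its branch assignments into one immutable binding, roughly as sketched below. The branch bodies are an assumption, since the hunk is truncated before the `else`; they follow the parse-then-wrap pattern implied by `rowParser` and `rawRow` above:
   ```scala
   override def next(): CarbonRow = {
     val rawRow = rows.next().values.asInstanceOf[Array[Object]]
     // One immutable binding instead of `var row : CarbonRow = null` plus reassignment
     val row = if (isRawDataRequired) {
       // Assumed branch body: keep the raw row alongside the parsed row
       new CarbonRow(rowParser.parseRow(rawRow), rawRow)
     } else {
       new CarbonRow(rowParser.parseRow(rawRow))
     }
     rowCounter.add(1)
     row
   }
   ```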




[GitHub] [carbondata] jackylk commented on a change in pull request #3540: [CARBONDATA-3639][CARBONDATA-3638] Fix global sort exception in load from CSV flow with binary non-sort columns

2019-12-29 Thread GitBox
jackylk commented on a change in pull request #3540: [CARBONDATA-3639][CARBONDATA-3638] Fix global sort exception in load from CSV flow with binary non-sort columns
URL: https://github.com/apache/carbondata/pull/3540#discussion_r361846507
 
 

 ##
 File path: integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessorStepOnSpark.scala
 ##
 @@ -97,6 +97,36 @@ object DataLoadProcessorStepOnSpark {
 }
   }
 
+  def inputFuncForCsvRows(
+      rows: Iterator[StringArrayRow],
+      index: Int,
+      modelBroadcast: Broadcast[CarbonLoadModel],
+      rowCounter: Accumulator[Int]): Iterator[CarbonRow] = {
+    val model: CarbonLoadModel = modelBroadcast.value.getCopyWithTaskNo(index.toString)
+    val conf = DataLoadProcessBuilder.createConfiguration(model)
+    val rowParser = new RowParserImpl(conf.getDataFields, conf)
+    val isRawDataRequired = CarbonDataProcessorUtil.isRawDataRequired(conf)
+    TaskContext.get().addTaskFailureListener { (t: TaskContext, e: Throwable) =>
+      wrapException(e, model)
+    }
+
+    new Iterator[CarbonRow] {
+      override def hasNext: Boolean = rows.hasNext
+
+      override def next(): CarbonRow = {
+        var row : CarbonRow = null
+        val rawRow = rows.next().values.asInstanceOf[Array[Object]]
+        if(isRawDataRequired) {
 
 Review comment:
   ```suggestion
   if (isRawDataRequired) {
   ```




[GitHub] [carbondata] jackylk commented on a change in pull request #3540: [CARBONDATA-3639][CARBONDATA-3638] Fix global sort exception in load from CSV flow with binary non-sort columns

2019-12-29 Thread GitBox
jackylk commented on a change in pull request #3540: [CARBONDATA-3639][CARBONDATA-3638] Fix global sort exception in load from CSV flow with binary non-sort columns
URL: https://github.com/apache/carbondata/pull/3540#discussion_r361846360
 
 

 ##
 File path: integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala
 ##
 @@ -88,11 +90,24 @@ object DataLoadProcessBuilderOnSpark {
 
     val conf = SparkSQLUtil.broadCastHadoopConf(sc, hadoopConf)
     // 1. Input
-    val inputRDD = originRDD
-      .mapPartitions(rows => DataLoadProcessorStepOnSpark.toRDDIterator(rows, modelBroadcast))
-      .mapPartitionsWithIndex { case (index, rows) =>
-        DataLoadProcessorStepOnSpark.inputFunc(rows, index, modelBroadcast, inputStepRowCounter)
+    val inputRDD = if (isLoadFromCSV) {
+      // No need to wrap with NewRDDIterator, which converts objects to strings,
+      // since the input is already strings. This avoids creating a new object
+      // per row during a CSV global sort load.
+      originRDD.mapPartitionsWithIndex { case (index, rows) => DataLoadProcessorStepOnSpark
 
 Review comment:
   Move `DataLoadProcessorStepOnSpark` to the next line
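   A sketch of the layout this comment asks for. The continuation `.inputFuncForCsvRows(...)` and its argument list are inferred from the new method shown earlier in `DataLoadProcessorStepOnSpark`, since this hunk is cut off:
   ```scala
   originRDD.mapPartitionsWithIndex { case (index, rows) =>
     // The object reference starts on its own line instead of trailing the case clause
     DataLoadProcessorStepOnSpark
       .inputFuncForCsvRows(rows, index, modelBroadcast, inputStepRowCounter)
   }
   ```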

