[jira] [Updated] (CARBONDATA-1115) load csv data fail

2017-09-18 - Ravindra Pesala (JIRA)

 [ https://issues.apache.org/jira/browse/CARBONDATA-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravindra Pesala updated CARBONDATA-1115:

Fix Version/s: 1.3.0 (was: 1.2.0)

> load csv data fail
> --
>
> Key: CARBONDATA-1115
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1115
> Project: CarbonData
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 1.2.0
> Environment: centos 7, spark2.1.0, hadoop 2.7
>Reporter: hyd
> Fix For: 1.3.0
>
>
> Is this a bug, or is there a problem with my environment? Can anyone help me?
> [root@localhost spark-2.1.0-bin-hadoop2.7]# ls /home/carbondata/sample.csv 
> /home/carbondata/sample.csv
> [root@localhost spark-2.1.0-bin-hadoop2.7]# ./bin/spark-shell --master 
> spark://192.168.32.114:7077 --total-executor-cores 2 --executor-memory 2G
> Using Spark's default log4j profile: 
> org/apache/spark/log4j-defaults.properties
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
> setLogLevel(newLevel).
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/spark-2.1.0-bin-hadoop2.7/carbonlib/carbondata_2.11-1.1.0-shade-hadoop2.2.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/opt/spark-2.1.0-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 17/06/01 14:44:54 WARN NativeCodeLoader: Unable to load native-hadoop library 
> for your platform... using builtin-java classes where applicable
> 17/06/01 14:44:54 WARN SparkConf: 
> SPARK_CLASSPATH was detected (set to './carbonlib/*').
> This is deprecated in Spark 1.0+.
> Please instead use:
>  - ./spark-submit with --driver-class-path to augment the driver classpath
>  - spark.executor.extraClassPath to augment the executor classpath
> 
> 17/06/01 14:44:54 WARN SparkConf: Setting 'spark.executor.extraClassPath' to 
> './carbonlib/*' as a work-around.
> 17/06/01 14:44:54 WARN SparkConf: Setting 'spark.driver.extraClassPath' to 
> './carbonlib/*' as a work-around.
> 17/06/01 14:44:54 WARN Utils: Your hostname, localhost.localdomain resolves 
> to a loopback address: 127.0.0.1; using 192.168.32.114 instead (on interface 
> em1)
> 17/06/01 14:44:54 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to 
> another address
> 17/06/01 14:44:59 WARN ObjectStore: Failed to get database global_temp, 
> returning NoSuchObjectException
> Spark context Web UI available at http://192.168.32.114:4040
> Spark context available as 'sc' (master = spark://192.168.32.114:7077, app id 
> = app-20170601144454-0001).
> Spark session available as 'spark'.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
>       /_/
>  
> Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala> import org.apache.spark.sql.SparkSession
> import org.apache.spark.sql.SparkSession
> scala> import org.apache.spark.sql.CarbonSession._
> import org.apache.spark.sql.CarbonSession._
> scala> val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("hdfs://192.168.32.114/test")
> 17/06/01 14:45:35 WARN SparkContext: Using an existing SparkContext; some 
> configuration may not take effect.
> 17/06/01 14:45:38 WARN ObjectStore: Failed to get database global_temp, 
> returning NoSuchObjectException
> carbon: org.apache.spark.sql.SparkSession = 
> org.apache.spark.sql.CarbonSession@2165b170
> scala> carbon.sql("CREATE TABLE IF NOT EXISTS test_table(id string, name 
> string, city string, age Int) STORED BY 'carbondata'")
> 17/06/01 14:45:45 AUDIT CreateTable: 
> [localhost.localdomain][root][Thread-1]Creating Table with Database name 
> [default] and Table name [test_table]
> res0: org.apache.spark.sql.DataFrame = []
> scala> carbon.sql("LOAD DATA LOCAL INPATH '/home/carbondata/sample.csv' INTO 
> TABLE test_table")
> 17/06/01 14:45:54 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 
> 192.168.32.114, executor 0): java.lang.ClassCastException: cannot assign 
> instance of scala.collection.immutable.List$SerializationProxy to field 
> org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type 
> scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
>   at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
>   at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
>   at java.io.O
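
The ClassCastException above (a List$SerializationProxy that cannot be assigned to the RDD's dependencies field) is a classic symptom of a driver/executor classpath mismatch: the executor deserializes the task against a different copy of the Spark/Scala classes than the driver serialized it with. Judging from the warnings earlier in the log, one plausible contributor is the deprecated SPARK_CLASSPATH='./carbonlib/*' setting, since a relative path is resolved against each executor's own working directory rather than the Spark install directory. A minimal sketch of a cleaner launch follows; it is an untested suggestion, and the jar name and paths are simply copied from the log above:

    # Hedged sketch, not a verified fix; jar name and paths are taken from the log above.
    SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7
    CARBON_JAR=$SPARK_HOME/carbonlib/carbondata_2.11-1.1.0-shade-hadoop2.2.0.jar

    # Drop the deprecated variable (or remove it from conf/spark-env.sh if it is set there).
    unset SPARK_CLASSPATH

    $SPARK_HOME/bin/spark-shell \
      --master spark://192.168.32.114:7077 \
      --total-executor-cores 2 --executor-memory 2G \
      --jars "$CARBON_JAR" \
      --conf spark.executor.extraClassPath="$CARBON_JAR" \
      --conf spark.driver.extraClassPath="$CARBON_JAR"

Here --jars ships the jar to every executor, and the two extraClassPath settings pin the same absolute path on both JVMs. Note also that the jar in the log is a CarbonData 1.1.0 build shaded for Hadoop 2.2.0 running on a Spark 2.1.0/Hadoop 2.7 distribution, so a CarbonData build matching the cluster's Hadoop profile may be needed as well.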

[jira] [Updated] (CARBONDATA-1115) load csv data fail

2018-02-03 - Ravindra Pesala (JIRA)

 [ https://issues.apache.org/jira/browse/CARBONDATA-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravindra Pesala updated CARBONDATA-1115:

Fix Version/s: 1.4.0 (was: 1.3.0)

> load csv data fail
> --
>
> Key: CARBONDATA-1115
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1115
> Project: CarbonData
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 1.2.0
> Environment: centos 7, spark2.1.0, hadoop 2.7
>Reporter: hyd
>Priority: Major
> Fix For: 1.4.0
>

[jira] [Updated] (CARBONDATA-1115) load csv data fail

2018-05-23 - Ravindra Pesala (JIRA)

 [ https://issues.apache.org/jira/browse/CARBONDATA-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravindra Pesala updated CARBONDATA-1115:

Fix Version/s: (was: 1.4.0)

> load csv data fail
> --
>
> Key: CARBONDATA-1115
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1115
> Project: CarbonData
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 1.2.0
> Environment: centos 7, spark2.1.0, hadoop 2.7
>Reporter: hyd
>Priority: Major
>