Hi Manish,

Could not find any .lock files in the carbon store.

I am getting the error while running spark-shell; I did not try the thrift
server. However, as you can see from the attached logs, it is using the
default store location (not the one from carbon.properties):

scala> cc.storePath
res0: String = /home/cduser/carbon.store
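
As a next step I will try passing the store path explicitly when creating the
context, the way the quick-start guide shows. This is only a sketch, assuming
the two-argument CarbonContext(sc, storePath) constructor is available in this
build:

scala> import org.apache.spark.sql.CarbonContext
scala> // explicit store path instead of the default under the user home
scala> val cc = new CarbonContext(sc, "hdfs://localhost:9000/opt/CarbonStore")
scala> cc.storePath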


Thanks,
Sanoj




On Mon, Feb 6, 2017 at 1:23 PM, manish gupta <tomanishgupt...@gmail.com>
wrote:

> Hi Sanoj,
>
> Please check if there is any file with .lock extension in the carbon store.
>
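> For example, something like the below should list any stale lock files. This
> is only a sketch; I am assuming the default local store path and the HDFS
> store path mentioned elsewhere in this thread, so adjust them as needed:
>
>   # local default store
>   find /home/cduser/carbon.store -name "*.lock"
>   # HDFS store configured in carbon.properties
>   hdfs dfs -ls -R /opt/CarbonStore | grep "\.lock"
>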
> Also, when you start the thrift server, the carbon store location will be
> printed in the thrift server logs. Please validate whether there is any
> mismatch between the store location provided by you and the one printed in
> the thrift server logs.
>
> Also please provide the complete logs for failure.
>
> Regards
> Manish Gupta
>
> On Mon, Feb 6, 2017 at 2:18 PM, Sanoj M George <sanoj.geo...@gmail.com>
> wrote:
>
> > Not yet resolved, still getting same error.
> >
> > On Mon, Feb 6, 2017 at 12:41 PM Raghunandan S <
> > carbondatacontributi...@gmail.com> wrote:
> >
> > > You mean the issue is resolved?
> > >
> > > Regards
> > > Raghunandan
> > >
> > > On 06-Feb-2017 1:36 PM, "Sanoj M George" <sanoj.geo...@gmail.com>
> wrote:
> > >
> > > Thanks Raghunandan. I checked the thread, but it seems this error is due
> > > to something else.
> > >
> > > Below are the parameters that I changed:
> > >
> > > **** carbon.properties :
> > > carbon.storelocation=hdfs://localhost:9000/opt/CarbonStore
> > > carbon.ddl.base.hdfs.url=hdfs://localhost:9000/opt/data
> > > carbon.kettle.home=/home/cduser/spark/carbonlib/carbonplugins
> > >
> > > **** spark-defaults.conf :
> > > carbon.kettle.home /home/cduser/spark/carbonlib/carbonplugins
> > > spark.driver.extraJavaOptions -Dcarbon.properties.filepath=/home/cduser/spark/conf/carbon.properties
> > > spark.executor.extraJavaOptions -Dcarbon.properties.filepath=/home/cduser/spark/conf/carbon.properties
> > >
> > > Although the store location is specified in carbon.properties, spark-shell
> > > was using "/home/cduser/carbon.store" as the store location.
> > >
> > > Regards
> > >
> > > On Sun, Feb 5, 2017 at 4:49 PM, Raghunandan S <
> > > carbondatacontributi...@gmail.com> wrote:
> > >
> > > > Dear Sanoj,
> > > > Please refer to
> > > > http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/Dictionary-file-is-locked-for-updation-td5076.html
> > > >
> > > > Let me know if this thread didn't address your problem.
> > > >
> > > > Regards
> > > >
> > > >
> > > > On 05-Feb-2017 5:22 PM, "Sanoj M George" <sanoj.geo...@gmail.com>
> > wrote:
> > > >
> > > > Hi All,
> > > >
> > > > I am getting the below error while trying out Carbondata with Spark
> > > > 1.6.2 / Hadoop 2.6.5 / Carbondata 1.
> > > >
> > > > ./bin/spark-shell --jars
> > > > carbonlib/carbondata_2.10-1.1.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar
> > > > scala> import org.apache.spark.sql.CarbonContext
> > > > scala> val cc = new CarbonContext(sc)
> > > > scala> cc.sql("CREATE TABLE IF NOT EXISTS t1 (id string, name string,
> > > city
> > > > string, age Int) STORED BY 'carbondata'")
> > > > scala> cc.sql("LOAD DATA INPATH '/home/cduser/spark/sample.csv' INTO
> > > TABLE
> > > > t1")
> > > > INFO  05-02 14:57:22,346 - main Query [LOAD DATA INPATH '/HOME/CDUSER/SPARK/SAMPLE.CSV' INTO TABLE T1]
> > > > INFO  05-02 14:57:37,411 - Table MetaData Unlocked Successfully after data load
> > > > java.lang.RuntimeException: Table is locked for updation. Please try after some time
> > > >         at scala.sys.package$.error(package.scala:27)
> > > >         at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:360)
> > > >         at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
> > > >         at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
> > > >
> > > >
> > > > I followed the docs at
> > > > https://github.com/apache/incubator-carbondata/blob/master/docs/installation-guide.md#installing-and-configuring-carbondata-on-standalone-spark-cluster
> > > > and
> > > > https://github.com/apache/incubator-carbondata/blob/master/docs/quick-start-guide.md
> > > > to install carbondata.
> > > >
> > > > While creating the table, I observed the below WARN message in the log:
> > > >
> > > > main Query [CREATE TABLE DEFAULT.T1 USING CARBONDATA OPTIONS (TABLENAME "DEFAULT.T1", TABLEPATH "/HOME/CDUSER/CARBON.STORE/DEFAULT/T1") ]
> > > >
> > > > WARN  05-02 14:34:30,656 - Couldn't find corresponding Hive SerDe for data source provider carbondata. Persisting data source relation `default`.`t1` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
> > > > INFO  05-02 14:34:30,755 - 0: create_table: Table(tableName:t1, dbName:default, owner:cduser, createTime:1486290870, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, comment:from deserializer)], location:null, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe, parameters:{tablePath=/home/cduser/carbon.store/default/t1, serialization.format=1, tableName=default.t1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{EXTERNAL=TRUE, spark.sql.sources.provider=carbondata}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null))
> > > >
> > > >
> > > > Appreciate any help in resolving this.
> > > >
> > > > Thanks,
> > > > Sanoj
> > > >
> > >
> > --
> > Sent from my iPhone
> >
>
cduser@Ubuntu-OptiPlex-990:~$ 
cduser@Ubuntu-OptiPlex-990:~$ cd spark
cduser@Ubuntu-OptiPlex-990:~/spark$ ./bin/spark-shell 
Warning: Ignoring non-spark config property: 
carbon.kettle.home=/home/cduser/spark/carbonlib/carbonplugins
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/home/cduser/spark-1.6.2-bin-hadoop2.6/carbonlib/carbondata_2.10-1.0.0-incubating-shade-hadoop2.2.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
WARN  06-02 16:07:02,378 - Unable to load native-hadoop library for your 
platform... using builtin-java classes where applicable
INFO  06-02 16:07:02,621 - Changing view acls to: cduser
INFO  06-02 16:07:02,621 - Changing modify acls to: cduser
INFO  06-02 16:07:02,622 - SecurityManager: authentication disabled; ui acls 
disabled; users with view permissions: Set(cduser); users with modify 
permissions: Set(cduser)
INFO  06-02 16:07:02,818 - Starting HTTP Server
INFO  06-02 16:07:02,853 - jetty-8.y.z-SNAPSHOT
INFO  06-02 16:07:02,868 - Started SocketConnector@0.0.0.0:35822
INFO  06-02 16:07:02,869 - Successfully started service 'HTTP class server' on 
port 35822.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_73)
Type in expressions to have them evaluated.
Type :help for more information.
WARN  06-02 16:07:05,913 - Your hostname, Ubuntu-OptiPlex-990 resolves to a 
loopback address: 127.0.1.1; using 10.33.31.29 instead (on interface eno1)
WARN  06-02 16:07:05,913 - Set SPARK_LOCAL_IP if you need to bind to another 
address
INFO  06-02 16:07:05,922 - Running Spark version 1.6.2
WARN  06-02 16:07:05,938 - 
SPARK_CLASSPATH was detected (set to 
'/home/cduser/spark/carbonlib/*:/home/cduser/spark/jars/*').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath
        
WARN  06-02 16:07:05,940 - Setting 'spark.executor.extraClassPath' to 
'/home/cduser/spark/carbonlib/*:/home/cduser/spark/jars/*' as a work-around.
WARN  06-02 16:07:05,941 - Setting 'spark.driver.extraClassPath' to 
'/home/cduser/spark/carbonlib/*:/home/cduser/spark/jars/*' as a work-around.
INFO  06-02 16:07:05,951 - Changing view acls to: cduser
INFO  06-02 16:07:05,952 - Changing modify acls to: cduser
INFO  06-02 16:07:05,952 - SecurityManager: authentication disabled; ui acls 
disabled; users with view permissions: Set(cduser); users with modify 
permissions: Set(cduser)
INFO  06-02 16:07:06,100 - Successfully started service 'sparkDriver' on port 
40777.
INFO  06-02 16:07:06,349 - Slf4jLogger started
INFO  06-02 16:07:06,376 - Starting remoting
INFO  06-02 16:07:06,477 - Remoting started; listening on addresses 
:[akka.tcp://sparkDriverActorSystem@10.33.31.29:39468]
INFO  06-02 16:07:06,482 - Successfully started service 
'sparkDriverActorSystem' on port 39468.
INFO  06-02 16:07:06,489 - Registering MapOutputTracker
INFO  06-02 16:07:06,501 - Registering BlockManagerMaster
INFO  06-02 16:07:06,513 - Created local directory at 
/tmp/blockmgr-c5e9aa84-783a-45f3-8314-fdc5a1326799
INFO  06-02 16:07:06,517 - MemoryStore started with capacity 511.1 MB
INFO  06-02 16:07:06,568 - Registering OutputCommitCoordinator
INFO  06-02 16:07:06,652 - jetty-8.y.z-SNAPSHOT
INFO  06-02 16:07:06,662 - Started SelectChannelConnector@0.0.0.0:4040
INFO  06-02 16:07:06,663 - Successfully started service 'SparkUI' on port 4040.
INFO  06-02 16:07:06,664 - Started SparkUI at http://10.33.31.29:4040
INFO  06-02 16:07:06,753 - Starting executor ID driver on host localhost
INFO  06-02 16:07:06,760 - Using REPL class URI: http://10.33.31.29:35822
INFO  06-02 16:07:06,777 - Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 46141.
INFO  06-02 16:07:06,778 - Server created on 46141
INFO  06-02 16:07:06,779 - Trying to register BlockManager
INFO  06-02 16:07:06,781 - Registering block manager localhost:46141 with 511.1 
MB RAM, BlockManagerId(driver, localhost, 46141)
INFO  06-02 16:07:06,783 - Registered BlockManager
INFO  06-02 16:07:06,894 - Created spark context..
Spark context available as sc.
INFO  06-02 16:07:07,687 - Initializing execution hive, version 1.2.1
INFO  06-02 16:07:07,736 - Inspected Hadoop version: 2.6.0
INFO  06-02 16:07:07,737 - Loaded org.apache.hadoop.hive.shims.Hadoop23Shims 
for Hadoop version 2.6.0
INFO  06-02 16:07:07,993 - 0: Opening raw store with implemenation 
class:org.apache.hadoop.hive.metastore.ObjectStore
INFO  06-02 16:07:08,018 - ObjectStore, initialize called
WARN  06-02 16:07:08,126 - Plugin (Bundle) "org.datanucleus.api.jdo" is already 
registered. Ensure you dont have multiple JAR versions of the same plugin in 
the classpath. The URL 
"file:/home/cduser/spark/lib/datanucleus-api-jdo-3.2.6.jar" is already 
registered, and you are trying to register an identical plugin located at URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar."
WARN  06-02 16:07:08,129 - Plugin (Bundle) "org.datanucleus.store.rdbms" is 
already registered. Ensure you dont have multiple JAR versions of the same 
plugin in the classpath. The URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar" 
is already registered, and you are trying to register an identical plugin 
located at URL "file:/home/cduser/spark/lib/datanucleus-rdbms-3.2.9.jar."
WARN  06-02 16:07:08,136 - Plugin (Bundle) "org.datanucleus" is already 
registered. Ensure you dont have multiple JAR versions of the same plugin in 
the classpath. The URL 
"file:/home/cduser/spark/lib/datanucleus-core-3.2.10.jar" is already 
registered, and you are trying to register an identical plugin located at URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar."
INFO  06-02 16:07:08,152 - Property hive.metastore.integral.jdo.pushdown 
unknown - will be ignored
INFO  06-02 16:07:08,152 - Property datanucleus.cache.level2 unknown - will be 
ignored
WARN  06-02 16:07:08,258 - BoneCP specified but not present in CLASSPATH (or 
one of dependencies)
WARN  06-02 16:07:08,482 - BoneCP specified but not present in CLASSPATH (or 
one of dependencies)
INFO  06-02 16:07:15,113 - Setting MetaStore object pin classes with 
hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
INFO  06-02 16:07:16,085 - The class 
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
"embedded-only" so does not have its own datastore table.
INFO  06-02 16:07:16,086 - The class 
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so 
does not have its own datastore table.
INFO  06-02 16:07:21,047 - The class 
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
"embedded-only" so does not have its own datastore table.
INFO  06-02 16:07:21,047 - The class 
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so 
does not have its own datastore table.
INFO  06-02 16:07:22,405 - Using direct SQL, underlying DB is DERBY
INFO  06-02 16:07:22,407 - Initialized ObjectStore
WARN  06-02 16:07:22,667 - Version information not found in metastore. 
hive.metastore.schema.verification is not enabled so recording the schema 
version 1.2.0
WARN  06-02 16:07:22,939 - Failed to get database default, returning 
NoSuchObjectException
INFO  06-02 16:07:23,422 - Added admin role in metastore
INFO  06-02 16:07:23,430 - Added public role in metastore
INFO  06-02 16:07:23,730 - No user is added in admin role, since config is empty
INFO  06-02 16:07:23,805 - 0: get_all_databases
INFO  06-02 16:07:23,806 - ugi=cduser   ip=unknown-ip-addr      
cmd=get_all_databases
INFO  06-02 16:07:23,818 - 0: get_functions: db=default pat=*
INFO  06-02 16:07:23,819 - ugi=cduser   ip=unknown-ip-addr      
cmd=get_functions: db=default pat=*
INFO  06-02 16:07:23,820 - The class 
"org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as 
"embedded-only" so does not have its own datastore table.
INFO  06-02 16:07:25,362 - Created local directory: 
/tmp/b49b5051-2a7d-46a4-8289-7b1fc144e84f_resources
INFO  06-02 16:07:25,382 - Created HDFS directory: 
/tmp/hive/cduser/b49b5051-2a7d-46a4-8289-7b1fc144e84f
INFO  06-02 16:07:25,386 - Created local directory: 
/tmp/cduser/b49b5051-2a7d-46a4-8289-7b1fc144e84f
INFO  06-02 16:07:25,397 - Created HDFS directory: 
/tmp/hive/cduser/b49b5051-2a7d-46a4-8289-7b1fc144e84f/_tmp_space.db
INFO  06-02 16:07:25,468 - default warehouse location is /user/hive/warehouse
INFO  06-02 16:07:25,475 - Initializing HiveMetastoreConnection version 1.2.1 
using Spark classes.
INFO  06-02 16:07:25,485 - Inspected Hadoop version: 2.6.0
INFO  06-02 16:07:25,503 - Loaded org.apache.hadoop.hive.shims.Hadoop23Shims 
for Hadoop version 2.6.0
INFO  06-02 16:07:25,828 - 0: Opening raw store with implemenation 
class:org.apache.hadoop.hive.metastore.ObjectStore
INFO  06-02 16:07:25,848 - ObjectStore, initialize called
WARN  06-02 16:07:25,933 - Plugin (Bundle) "org.datanucleus.api.jdo" is already 
registered. Ensure you dont have multiple JAR versions of the same plugin in 
the classpath. The URL 
"file:/home/cduser/spark/lib/datanucleus-api-jdo-3.2.6.jar" is already 
registered, and you are trying to register an identical plugin located at URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar."
WARN  06-02 16:07:25,935 - Plugin (Bundle) "org.datanucleus.store.rdbms" is 
already registered. Ensure you dont have multiple JAR versions of the same 
plugin in the classpath. The URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar" 
is already registered, and you are trying to register an identical plugin 
located at URL "file:/home/cduser/spark/lib/datanucleus-rdbms-3.2.9.jar."
WARN  06-02 16:07:25,941 - Plugin (Bundle) "org.datanucleus" is already 
registered. Ensure you dont have multiple JAR versions of the same plugin in 
the classpath. The URL 
"file:/home/cduser/spark/lib/datanucleus-core-3.2.10.jar" is already 
registered, and you are trying to register an identical plugin located at URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar."
INFO  06-02 16:07:26,140 - Property hive.metastore.integral.jdo.pushdown 
unknown - will be ignored
INFO  06-02 16:07:26,140 - Property datanucleus.cache.level2 unknown - will be 
ignored
WARN  06-02 16:07:26,213 - BoneCP specified but not present in CLASSPATH (or 
one of dependencies)
WARN  06-02 16:07:26,358 - BoneCP specified but not present in CLASSPATH (or 
one of dependencies)
INFO  06-02 16:07:27,450 - Setting MetaStore object pin classes with 
hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
INFO  06-02 16:07:28,002 - The class 
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
"embedded-only" so does not have its own datastore table.
INFO  06-02 16:07:28,002 - The class 
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so 
does not have its own datastore table.
INFO  06-02 16:07:28,178 - The class 
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
"embedded-only" so does not have its own datastore table.
INFO  06-02 16:07:28,178 - The class 
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so 
does not have its own datastore table.
INFO  06-02 16:07:28,252 - Reading in results for query 
"org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is 
closing
INFO  06-02 16:07:28,253 - Using direct SQL, underlying DB is DERBY
INFO  06-02 16:07:28,255 - Initialized ObjectStore
INFO  06-02 16:07:28,425 - Added admin role in metastore
INFO  06-02 16:07:28,426 - Added public role in metastore
INFO  06-02 16:07:28,467 - No user is added in admin role, since config is empty
INFO  06-02 16:07:28,533 - 0: get_all_databases
INFO  06-02 16:07:28,534 - ugi=cduser   ip=unknown-ip-addr      
cmd=get_all_databases
INFO  06-02 16:07:28,549 - 0: get_functions: db=default pat=*
INFO  06-02 16:07:28,549 - ugi=cduser   ip=unknown-ip-addr      
cmd=get_functions: db=default pat=*
INFO  06-02 16:07:28,551 - The class 
"org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as 
"embedded-only" so does not have its own datastore table.
INFO  06-02 16:07:28,617 - Created local directory: 
/tmp/98cf7366-e5cc-4faf-9319-c17520ef9f43_resources
INFO  06-02 16:07:28,635 - Created HDFS directory: 
/tmp/hive/cduser/98cf7366-e5cc-4faf-9319-c17520ef9f43
INFO  06-02 16:07:28,639 - Created local directory: 
/tmp/cduser/98cf7366-e5cc-4faf-9319-c17520ef9f43
INFO  06-02 16:07:28,653 - Created HDFS directory: 
/tmp/hive/cduser/98cf7366-e5cc-4faf-9319-c17520ef9f43/_tmp_space.db
INFO  06-02 16:07:28,669 - Created sql context (with Hive support)..
SQL context available as sqlContext.

scala> import org.apache.spark.sql.CarbonContext 
import org.apache.spark.sql.CarbonContext

scala> val cc = new CarbonContext(sc)
INFO  06-02 16:08:04,507 - Initializing execution hive, version 1.2.1
INFO  06-02 16:08:04,508 - Inspected Hadoop version: 2.6.0
INFO  06-02 16:08:04,508 - Loaded org.apache.hadoop.hive.shims.Hadoop23Shims 
for Hadoop version 2.6.0
INFO  06-02 16:08:04,563 - Mestastore configuration 
hive.metastore.warehouse.dir changed from 
file:/tmp/spark-ce649e3a-4084-4a97-a726-0c8391fef47b/metastore to 
file:/tmp/spark-d36e1639-85b9-4c1c-8a73-6729797e9c9e/metastore
INFO  06-02 16:08:04,563 - Mestastore configuration 
javax.jdo.option.ConnectionURL changed from 
jdbc:derby:;databaseName=/tmp/spark-ce649e3a-4084-4a97-a726-0c8391fef47b/metastore;create=true
 to 
jdbc:derby:;databaseName=/tmp/spark-d36e1639-85b9-4c1c-8a73-6729797e9c9e/metastore;create=true
INFO  06-02 16:08:04,563 - 0: Shutting down the object store...
INFO  06-02 16:08:04,563 - ugi=cduser   ip=unknown-ip-addr      cmd=Shutting 
down the object store...
INFO  06-02 16:08:04,564 - 0: Metastore shutdown complete.
INFO  06-02 16:08:04,564 - ugi=cduser   ip=unknown-ip-addr      cmd=Metastore 
shutdown complete.
INFO  06-02 16:08:04,564 - 0: Opening raw store with implemenation 
class:org.apache.hadoop.hive.metastore.ObjectStore
INFO  06-02 16:08:04,565 - ObjectStore, initialize called
WARN  06-02 16:08:04,580 - Plugin (Bundle) "org.datanucleus.api.jdo" is already 
registered. Ensure you dont have multiple JAR versions of the same plugin in 
the classpath. The URL 
"file:/home/cduser/spark/lib/datanucleus-api-jdo-3.2.6.jar" is already 
registered, and you are trying to register an identical plugin located at URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar."
WARN  06-02 16:08:04,582 - Plugin (Bundle) "org.datanucleus.store.rdbms" is 
already registered. Ensure you dont have multiple JAR versions of the same 
plugin in the classpath. The URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar" 
is already registered, and you are trying to register an identical plugin 
located at URL "file:/home/cduser/spark/lib/datanucleus-rdbms-3.2.9.jar."
WARN  06-02 16:08:04,588 - Plugin (Bundle) "org.datanucleus" is already 
registered. Ensure you dont have multiple JAR versions of the same plugin in 
the classpath. The URL 
"file:/home/cduser/spark/lib/datanucleus-core-3.2.10.jar" is already 
registered, and you are trying to register an identical plugin located at URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar."
INFO  06-02 16:08:04,592 - Property hive.metastore.integral.jdo.pushdown 
unknown - will be ignored
INFO  06-02 16:08:04,593 - Property datanucleus.cache.level2 unknown - will be 
ignored
WARN  06-02 16:08:04,678 - BoneCP specified but not present in CLASSPATH (or 
one of dependencies)
WARN  06-02 16:08:04,682 - BoneCP specified but not present in CLASSPATH (or 
one of dependencies)
INFO  06-02 16:08:10,958 - Setting MetaStore object pin classes with 
hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
INFO  06-02 16:08:11,399 - The class 
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
"embedded-only" so does not have its own datastore table.
INFO  06-02 16:08:11,399 - The class 
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so 
does not have its own datastore table.
INFO  06-02 16:08:16,607 - The class 
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
"embedded-only" so does not have its own datastore table.
INFO  06-02 16:08:16,607 - The class 
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so 
does not have its own datastore table.
INFO  06-02 16:08:18,081 - Using direct SQL, underlying DB is DERBY
INFO  06-02 16:08:18,082 - Initialized ObjectStore
WARN  06-02 16:08:18,086 - Failed to get database default, returning 
NoSuchObjectException
INFO  06-02 16:08:18,837 - Added admin role in metastore
INFO  06-02 16:08:18,845 - Added public role in metastore
INFO  06-02 16:08:19,311 - No user is added in admin role, since config is empty
INFO  06-02 16:08:19,325 - Created local directory: 
/tmp/21aae6e1-befd-48ed-bba7-4f6bcf29e33c_resources
INFO  06-02 16:08:19,334 - Created HDFS directory: 
/tmp/hive/cduser/21aae6e1-befd-48ed-bba7-4f6bcf29e33c
INFO  06-02 16:08:19,339 - Created local directory: 
/tmp/cduser/21aae6e1-befd-48ed-bba7-4f6bcf29e33c
INFO  06-02 16:08:19,351 - Created HDFS directory: 
/tmp/hive/cduser/21aae6e1-befd-48ed-bba7-4f6bcf29e33c/_tmp_space.db
INFO  06-02 16:08:19,405 - default warehouse location is /user/hive/warehouse
INFO  06-02 16:08:19,418 - Initializing HiveMetastoreConnection version 1.2.1 
using Spark classes.
INFO  06-02 16:08:19,425 - Inspected Hadoop version: 2.6.0
INFO  06-02 16:08:19,439 - Loaded org.apache.hadoop.hive.shims.Hadoop23Shims 
for Hadoop version 2.6.0
INFO  06-02 16:08:19,763 - 0: Opening raw store with implemenation 
class:org.apache.hadoop.hive.metastore.ObjectStore
INFO  06-02 16:08:19,786 - ObjectStore, initialize called
WARN  06-02 16:08:19,874 - Plugin (Bundle) "org.datanucleus.api.jdo" is already 
registered. Ensure you dont have multiple JAR versions of the same plugin in 
the classpath. The URL 
"file:/home/cduser/spark/lib/datanucleus-api-jdo-3.2.6.jar" is already 
registered, and you are trying to register an identical plugin located at URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar."
WARN  06-02 16:08:19,876 - Plugin (Bundle) "org.datanucleus.store.rdbms" is 
already registered. Ensure you dont have multiple JAR versions of the same 
plugin in the classpath. The URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar" 
is already registered, and you are trying to register an identical plugin 
located at URL "file:/home/cduser/spark/lib/datanucleus-rdbms-3.2.9.jar."
WARN  06-02 16:08:19,882 - Plugin (Bundle) "org.datanucleus" is already 
registered. Ensure you dont have multiple JAR versions of the same plugin in 
the classpath. The URL 
"file:/home/cduser/spark/lib/datanucleus-core-3.2.10.jar" is already 
registered, and you are trying to register an identical plugin located at URL 
"file:/home/cduser/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar."
INFO  06-02 16:08:19,894 - Property hive.metastore.integral.jdo.pushdown 
unknown - will be ignored
INFO  06-02 16:08:19,894 - Property datanucleus.cache.level2 unknown - will be 
ignored
WARN  06-02 16:08:19,984 - BoneCP specified but not present in CLASSPATH (or 
one of dependencies)
WARN  06-02 16:08:20,128 - BoneCP specified but not present in CLASSPATH (or 
one of dependencies)
INFO  06-02 16:08:28,909 - Setting MetaStore object pin classes with 
hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
INFO  06-02 16:08:30,035 - The class 
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
"embedded-only" so does not have its own datastore table.
INFO  06-02 16:08:30,036 - The class 
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so 
does not have its own datastore table.
INFO  06-02 16:08:35,929 - The class 
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
"embedded-only" so does not have its own datastore table.
INFO  06-02 16:08:35,929 - The class 
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so 
does not have its own datastore table.
INFO  06-02 16:08:37,916 - Using direct SQL, underlying DB is DERBY
INFO  06-02 16:08:37,918 - Initialized ObjectStore
WARN  06-02 16:08:38,483 - Version information not found in metastore. 
hive.metastore.schema.verification is not enabled so recording the schema 
version 1.2.0
WARN  06-02 16:08:38,824 - Failed to get database default, returning 
NoSuchObjectException
INFO  06-02 16:08:39,167 - Added admin role in metastore
INFO  06-02 16:08:39,175 - Added public role in metastore
INFO  06-02 16:08:39,567 - No user is added in admin role, since config is empty
INFO  06-02 16:08:39,652 - 0: get_all_databases
INFO  06-02 16:08:39,653 - ugi=cduser   ip=unknown-ip-addr      
cmd=get_all_databases
INFO  06-02 16:08:39,665 - 0: get_functions: db=default pat=*
INFO  06-02 16:08:39,666 - ugi=cduser   ip=unknown-ip-addr      
cmd=get_functions: db=default pat=*
INFO  06-02 16:08:39,667 - The class 
"org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as 
"embedded-only" so does not have its own datastore table.
INFO  06-02 16:08:40,914 - Created local directory: 
/tmp/294dcaa9-e4c2-48a9-bf64-4defddf51df1_resources
INFO  06-02 16:08:40,937 - Created HDFS directory: 
/tmp/hive/cduser/294dcaa9-e4c2-48a9-bf64-4defddf51df1
INFO  06-02 16:08:40,941 - Created local directory: 
/tmp/cduser/294dcaa9-e4c2-48a9-bf64-4defddf51df1
INFO  06-02 16:08:40,963 - Created HDFS directory: 
/tmp/hive/cduser/294dcaa9-e4c2-48a9-bf64-4defddf51df1/_tmp_space.db
INFO  06-02 16:08:42,140 - main Property file path: 
/home/cduser/spark/conf/carbon.properties
INFO  06-02 16:08:42,141 - main ------Using Carbon.properties --------
INFO  06-02 16:08:42,141 - main {carbon.graph.rowset.size=100000, 
carbon.enable.quick.filter=false, carbon.number.of.cores=4, 
carbon.sort.file.buffer.size=20, 
carbon.kettle.home=/home/cduser/spark/carbonlib/carbonplugins, 
carbon.number.of.cores.while.compacting=2, 
carbon.compaction.level.threshold=4,3, carbon.number.of.cores.while.loading=6, 
carbon.badRecords.location=/home/cduser/opt/Carbon/Spark/badrecords, 
carbon.sort.size=500000, carbon.inmemory.record.size=120000, 
carbon.enableXXHash=true, carbon.ddl.base.hdfs.url=home/cduser/opt/data, 
carbon.major.compaction.size=1024, 
carbon.storelocation=/home/cduser/testcarbonstore}
INFO  06-02 16:08:42,141 - main Executor start up wait time: 5
cc: org.apache.spark.sql.CarbonContext = 
org.apache.spark.sql.CarbonContext@4230d4f

scala> cc.storePath
res0: String = /home/cduser/carbon.store

scala> cc.sql("CREATE TABLE IF NOT EXISTS t1 (id string, name string, city 
string, age Int) STORED BY 'carbondata'")
INFO  06-02 16:09:11,829 - main Query [CREATE TABLE IF NOT EXISTS T1 (ID 
STRING, NAME STRING, CITY STRING, AGE INT) STORED BY 'CARBONDATA']
INFO  06-02 16:09:11,993 - Parsing command: CREATE TABLE IF NOT EXISTS t1 (id 
string, name string, city string, age Int) STORED BY 'carbondata'
INFO  06-02 16:09:12,736 - Parse Completed
AUDIT 06-02 16:09:12,985 - [Ubuntu-OptiPlex-990][cduser][Thread-1]Creating 
Table with Database name [default] and Table name [t1]
INFO  06-02 16:09:12,994 - 0: get_tables: db=default pat=.*
INFO  06-02 16:09:12,994 - ugi=cduser   ip=unknown-ip-addr      cmd=get_tables: 
db=default pat=.*
INFO  06-02 16:09:13,025 - main Table block size not specified for default_t1. 
Therefore considering the default value 1024 MB
INFO  06-02 16:09:13,035 - Table t1 for Database default created successfully.
INFO  06-02 16:09:13,035 - main Table t1 for Database default created 
successfully.
AUDIT 06-02 16:09:13,036 - [Ubuntu-OptiPlex-990][cduser][Thread-1]Creating 
timestamp file for default.t1
INFO  06-02 16:09:13,036 - main Query [CREATE TABLE DEFAULT.T1 USING CARBONDATA 
OPTIONS (TABLENAME "DEFAULT.T1", TABLEPATH 
"/HOME/CDUSER/CARBON.STORE/DEFAULT/T1") ]
INFO  06-02 16:09:13,089 - 0: get_table : db=default tbl=t1
INFO  06-02 16:09:13,090 - ugi=cduser   ip=unknown-ip-addr      cmd=get_table : 
db=default tbl=t1
WARN  06-02 16:09:13,123 - Couldn't find corresponding Hive SerDe for data 
source provider carbondata. Persisting data source relation `default`.`t1` into 
Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
INFO  06-02 16:09:13,583 - 0: create_table: Table(tableName:t1, dbName:default, 
owner:cduser, createTime:1486382953, lastAccessTime:0, retention:0, 
sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, 
comment:from deserializer)], location:null, 
inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, 
outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, 
compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
serializationLib:org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe, 
parameters:{tablePath=/home/cduser/carbon.store/default/t1, 
serialization.format=1, tableName=default.t1}), bucketCols:[], sortCols:[], 
parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{EXTERNAL=TRUE, 
spark.sql.sources.provider=carbondata}, viewOriginalText:null, 
viewExpandedText:null, tableType:MANAGED_TABLE, 
privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, 
rolePrivileges:null))
INFO  06-02 16:09:13,583 - ugi=cduser   ip=unknown-ip-addr      
cmd=create_table: Table(tableName:t1, dbName:default, owner:cduser, 
createTime:1486382953, lastAccessTime:0, retention:0, 
sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, 
comment:from deserializer)], location:null, 
inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, 
outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, 
compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
serializationLib:org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe, 
parameters:{tablePath=/home/cduser/carbon.store/default/t1, 
serialization.format=1, tableName=default.t1}), bucketCols:[], sortCols:[], 
parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{EXTERNAL=TRUE, 
spark.sql.sources.provider=carbondata}, viewOriginalText:null, 
viewExpandedText:null, tableType:MANAGED_TABLE, 
privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, 
rolePrivileges:null))
INFO  06-02 16:09:13,630 - Updating table stats fast for t1
INFO  06-02 16:09:13,630 - Updated size of table t1 to 0
AUDIT 06-02 16:09:13,821 - [Ubuntu-OptiPlex-990][cduser][Thread-1]Table created 
with Database name [default] and Table name [t1]
res1: org.apache.spark.sql.DataFrame = []

scala> cc.sql("show tables").show
INFO  06-02 16:09:31,834 - main Query [SHOW TABLES]
INFO  06-02 16:09:31,841 - 0: get_tables: db=default pat=.*
INFO  06-02 16:09:31,841 - ugi=cduser   ip=unknown-ip-addr      cmd=get_tables: 
db=default pat=.*
INFO  06-02 16:09:31,924 - Starting job: show at <console>:31
INFO  06-02 16:09:31,935 - Got job 0 (show at <console>:31) with 1 output 
partitions
INFO  06-02 16:09:31,936 - Final stage: ResultStage 0 (show at <console>:31)
INFO  06-02 16:09:31,936 - Parents of final stage: List()
INFO  06-02 16:09:31,938 - Missing parents: List()
INFO  06-02 16:09:31,944 - Submitting ResultStage 0 (MapPartitionsRDD[3] at 
show at <console>:31), which has no missing parents
INFO  06-02 16:09:32,017 - Block broadcast_0 stored as values in memory 
(estimated size 1824.0 B, free 1824.0 B)
INFO  06-02 16:09:32,026 - Block broadcast_0_piece0 stored as bytes in memory 
(estimated size 1176.0 B, free 2.9 KB)
INFO  06-02 16:09:32,027 - Added broadcast_0_piece0 in memory on 
localhost:46141 (size: 1176.0 B, free: 511.1 MB)
INFO  06-02 16:09:32,029 - Created broadcast 0 from broadcast at 
DAGScheduler.scala:1006
INFO  06-02 16:09:32,034 - Submitting 1 missing tasks from ResultStage 0 
(MapPartitionsRDD[3] at show at <console>:31)
INFO  06-02 16:09:32,035 - Adding task set 0.0 with 1 tasks
INFO  06-02 16:09:32,067 - Starting task 0.0 in stage 0.0 (TID 0, localhost, 
partition 0,PROCESS_LOCAL, 2401 bytes)
INFO  06-02 16:09:32,074 - Running task 0.0 in stage 0.0 (TID 0)
INFO  06-02 16:09:32,096 - Finished task 0.0 in stage 0.0 (TID 0). 1256 bytes 
result sent to driver
INFO  06-02 16:09:32,101 - Finished task 0.0 in stage 0.0 (TID 0) in 51 ms on 
localhost (1/1)
INFO  06-02 16:09:32,102 - Removed TaskSet 0.0, whose tasks have all completed, 
from pool 
INFO  06-02 16:09:32,103 - ResultStage 0 (show at <console>:31) finished in 
0.061 s
INFO  06-02 16:09:32,106 - Job 0 finished: show at <console>:31, took 0.182180 s
+---------+-----------+
|tableName|isTemporary|
+---------+-----------+
|       t1|      false|
+---------+-----------+


scala> cc.sql("desc t1").show
INFO  06-02 16:09:41,872 - main Query [DESC T1]
INFO  06-02 16:09:41,879 - 0: get_table : db=default tbl=t1
INFO  06-02 16:09:41,879 - ugi=cduser   ip=unknown-ip-addr      cmd=get_table : 
db=default tbl=t1
INFO  06-02 16:09:42,385 - 0: get_table : db=default tbl=t1
INFO  06-02 16:09:42,385 - ugi=cduser   ip=unknown-ip-addr      cmd=get_table : 
db=default tbl=t1
INFO  06-02 16:09:42,440 - main Starting to optimize plan
INFO  06-02 16:09:42,469 - Starting job: show at <console>:31
INFO  06-02 16:09:42,470 - Got job 1 (show at <console>:31) with 1 output 
partitions
INFO  06-02 16:09:42,470 - Final stage: ResultStage 1 (show at <console>:31)
INFO  06-02 16:09:42,470 - Parents of final stage: List()
INFO  06-02 16:09:42,470 - Missing parents: List()
INFO  06-02 16:09:42,470 - Submitting ResultStage 1 (MapPartitionsRDD[5] at 
show at <console>:31), which has no missing parents
INFO  06-02 16:09:42,471 - Block broadcast_1 stored as values in memory 
(estimated size 1824.0 B, free 4.7 KB)
INFO  06-02 16:09:42,473 - Block broadcast_1_piece0 stored as bytes in memory 
(estimated size 1175.0 B, free 5.9 KB)
INFO  06-02 16:09:42,473 - Added broadcast_1_piece0 in memory on 
localhost:46141 (size: 1175.0 B, free: 511.1 MB)
INFO  06-02 16:09:42,474 - Created broadcast 1 from broadcast at 
DAGScheduler.scala:1006
INFO  06-02 16:09:42,474 - Submitting 1 missing tasks from ResultStage 1 
(MapPartitionsRDD[5] at show at <console>:31)
INFO  06-02 16:09:42,474 - Adding task set 1.0 with 1 tasks
INFO  06-02 16:09:42,475 - Starting task 0.0 in stage 1.0 (TID 1, localhost, 
partition 0,PROCESS_LOCAL, 2584 bytes)
INFO  06-02 16:09:42,476 - Running task 0.0 in stage 1.0 (TID 1)
INFO  06-02 16:09:42,478 - Finished task 0.0 in stage 1.0 (TID 1). 1439 bytes 
result sent to driver
INFO  06-02 16:09:42,479 - Finished task 0.0 in stage 1.0 (TID 1) in 4 ms on 
localhost (1/1)
INFO  06-02 16:09:42,479 - ResultStage 1 (show at <console>:31) finished in 
0.005 s
INFO  06-02 16:09:42,480 - Job 1 finished: show at <console>:31, took 0.010462 s
+--------+---------+-------+
|col_name|data_type|comment|
+--------+---------+-------+
|      id|   string|       |
|    name|   string|       |
|    city|   string|       |
|     age|   bigint|       |
+--------+---------+-------+


scala> INFO  06-02 16:09:42,481 - Removed TaskSet 1.0, whose tasks have all 
completed, from pool 


scala> cc.sql("LOAD DATA INPATH '/home/cduser/spark/sample.csv' INTO TABLE t1")
INFO  06-02 16:10:11,688 - main Query [LOAD DATA INPATH 
'/HOME/CDUSER/SPARK/SAMPLE.CSV' INTO TABLE T1]
INFO  06-02 16:10:26,712 - Table MetaData Unlocked Successfully after data load
java.lang.RuntimeException: Table is locked for updation. Please try after some 
time
        at scala.sys.package$.error(package.scala:27)
        at 
org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:360)
        at 
org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
        at 
org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
        at 
org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
        at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
        at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
        at 
org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
        at 
org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
        at org.apache.spark.sql.CarbonContext.sql(CarbonContext.scala:139)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:31)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
        at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:40)
        at $iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
        at $iwC$$iwC$$iwC.<init>(<console>:44)
        at $iwC$$iwC.<init>(<console>:46)
        at $iwC.<init>(<console>:48)
        at <init>(<console>:50)
        at .<init>(<console>:54)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at 
org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at 
org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
        at 
org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at 
org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
        at 
org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
        at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
        at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
        at 
org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
        at 
org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
        at 
org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at 
org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at 
scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at 
org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


scala> 
