[ https://issues.apache.org/jira/browse/CARBONDATA-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacky Li resolved CARBONDATA-2508.
----------------------------------
       Resolution: Fixed
    Fix Version/s: 1.4.1

> There are some errors when running SearchModeExample
> ----------------------------------------------------
>
>                 Key: CARBONDATA-2508
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2508
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: xubo245
>            Assignee: xubo245
>            Priority: Major
>             Fix For: 1.4.1
>
>          Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> There are some errors when running 
> org.apache.carbondata.examples.SearchModeExample:
> {code:java}
> org.apache.carbondata.examples.SearchModeExample
> log4j:WARN No appenders could be found for logger 
> (org.apache.carbondata.core.util.CarbonProperties).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> Using Spark's default log4j profile: 
> org/apache/spark/log4j-defaults.properties
> 18/05/22 16:12:42 INFO SparkContext: Running Spark version 2.2.1
> 18/05/22 16:12:42 WARN NativeCodeLoader: Unable to load native-hadoop library 
> for your platform... using builtin-java classes where applicable
> 18/05/22 16:12:42 WARN Utils: Your hostname, localhost resolves to a loopback 
> address: 127.0.0.1; using 192.168.44.90 instead (on interface en3)
> 18/05/22 16:12:42 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to 
> another address
> 18/05/22 16:12:42 INFO SparkContext: Submitted application: SearchModeExample
> 18/05/22 16:12:42 INFO SecurityManager: Changing view acls to: xubo
> 18/05/22 16:12:42 INFO SecurityManager: Changing modify acls to: xubo
> 18/05/22 16:12:42 INFO SecurityManager: Changing view acls groups to: 
> 18/05/22 16:12:42 INFO SecurityManager: Changing modify acls groups to: 
> 18/05/22 16:12:42 INFO SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users  with view permissions: Set(xubo); groups 
> with view permissions: Set(); users  with modify permissions: Set(xubo); 
> groups with modify permissions: Set()
> 18/05/22 16:12:43 INFO Utils: Successfully started service 'sparkDriver' on 
> port 64124.
> 18/05/22 16:12:43 INFO SparkEnv: Registering MapOutputTracker
> 18/05/22 16:12:43 INFO SparkEnv: Registering BlockManagerMaster
> 18/05/22 16:12:43 INFO BlockManagerMasterEndpoint: Using 
> org.apache.spark.storage.DefaultTopologyMapper for getting topology 
> information
> 18/05/22 16:12:43 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint 
> up
> 18/05/22 16:12:43 INFO DiskBlockManager: Created local directory at 
> /private/var/folders/lw/4y5plg0x7rq45h38m4sfxlbm0000gn/T/blockmgr-0ed23439-9e4f-4798-b197-0681f40e9fa5
> 18/05/22 16:12:43 INFO MemoryStore: MemoryStore started with capacity 2004.6 
> MB
> 18/05/22 16:12:43 INFO SparkEnv: Registering OutputCommitCoordinator
> 18/05/22 16:12:43 INFO Utils: Successfully started service 'SparkUI' on port 
> 4040.
> 18/05/22 16:12:43 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at 
> http://192.168.44.90:4040
> 18/05/22 16:12:43 INFO Executor: Starting executor ID driver on host localhost
> 18/05/22 16:12:43 INFO Utils: Successfully started service 
> 'org.apache.spark.network.netty.NettyBlockTransferService' on port 64125.
> 18/05/22 16:12:43 INFO NettyBlockTransferService: Server created on 
> 192.168.44.90:64125
> 18/05/22 16:12:43 INFO BlockManager: Using 
> org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
> policy
> 18/05/22 16:12:43 INFO BlockManagerMaster: Registering BlockManager 
> BlockManagerId(driver, 192.168.44.90, 64125, None)
> 18/05/22 16:12:43 INFO BlockManagerMasterEndpoint: Registering block manager 
> 192.168.44.90:64125 with 2004.6 MB RAM, BlockManagerId(driver, 192.168.44.90, 
> 64125, None)
> 18/05/22 16:12:43 INFO BlockManagerMaster: Registered BlockManager 
> BlockManagerId(driver, 192.168.44.90, 64125, None)
> 18/05/22 16:12:43 INFO BlockManager: Initialized BlockManager: 
> BlockManagerId(driver, 192.168.44.90, 64125, None)
> 18/05/22 16:12:43 INFO SharedState: Setting hive.metastore.warehouse.dir 
> ('null') to the value of spark.sql.warehouse.dir 
> ('file:/Users/xubo/Desktop/xubo/git/carbondata1/spark-warehouse').
> 18/05/22 16:12:43 INFO SharedState: Warehouse path is 
> 'file:/Users/xubo/Desktop/xubo/git/carbondata1/spark-warehouse'.
> 18/05/22 16:12:44 INFO HiveUtils: Initializing HiveMetastoreConnection 
> version 1.2.1 using Spark classes.
> 18/05/22 16:12:45 INFO HiveMetaStore: 0: Opening raw store with implemenation 
> class:org.apache.hadoop.hive.metastore.ObjectStore
> 18/05/22 16:12:45 INFO ObjectStore: ObjectStore, initialize called
> 18/05/22 16:12:45 INFO Persistence: Property 
> hive.metastore.integral.jdo.pushdown unknown - will be ignored
> 18/05/22 16:12:45 INFO Persistence: Property datanucleus.cache.level2 unknown 
> - will be ignored
> 18/05/22 16:12:46 INFO ObjectStore: Setting MetaStore object pin classes with 
> hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
> 18/05/22 16:12:47 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
> "embedded-only" so does not have its own datastore table.
> 18/05/22 16:12:47 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" 
> so does not have its own datastore table.
> 18/05/22 16:12:47 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
> "embedded-only" so does not have its own datastore table.
> 18/05/22 16:12:47 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" 
> so does not have its own datastore table.
> 18/05/22 16:12:47 INFO Query: Reading in results for query 
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is 
> closing
> 18/05/22 16:12:47 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is 
> DERBY
> 18/05/22 16:12:47 INFO ObjectStore: Initialized ObjectStore
> 18/05/22 16:12:47 INFO HiveMetaStore: Added admin role in metastore
> 18/05/22 16:12:47 INFO HiveMetaStore: Added public role in metastore
> 18/05/22 16:12:47 INFO HiveMetaStore: No user is added in admin role, since 
> config is empty
> 18/05/22 16:12:48 INFO HiveMetaStore: 0: get_all_databases
> 18/05/22 16:12:48 INFO audit: ugi=xubo        ip=unknown-ip-addr      
> cmd=get_all_databases   
> 18/05/22 16:12:48 INFO HiveMetaStore: 0: get_functions: db=default pat=*
> 18/05/22 16:12:48 INFO audit: ugi=xubo        ip=unknown-ip-addr      
> cmd=get_functions: db=default pat=*     
> 18/05/22 16:12:48 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as 
> "embedded-only" so does not have its own datastore table.
> 18/05/22 16:12:48 INFO SessionState: Created local directory: 
> /var/folders/lw/4y5plg0x7rq45h38m4sfxlbm0000gn/T/af495878-1838-415e-98e8-83d405d8cac4_resources
> 18/05/22 16:12:48 INFO SessionState: Created HDFS directory: 
> /tmp/hive/xubo/af495878-1838-415e-98e8-83d405d8cac4
> 18/05/22 16:12:48 INFO SessionState: Created local directory: 
> /var/folders/lw/4y5plg0x7rq45h38m4sfxlbm0000gn/T/xubo/af495878-1838-415e-98e8-83d405d8cac4
> 18/05/22 16:12:48 INFO SessionState: Created HDFS directory: 
> /tmp/hive/xubo/af495878-1838-415e-98e8-83d405d8cac4/_tmp_space.db
> 18/05/22 16:12:48 INFO HiveClientImpl: Warehouse location for Hive client 
> (version 1.2.1) is 
> file:/Users/xubo/Desktop/xubo/git/carbondata1/spark-warehouse
> 18/05/22 16:12:48 INFO HiveMetaStore: 0: get_database: default
> 18/05/22 16:12:48 INFO audit: ugi=xubo        ip=unknown-ip-addr      
> cmd=get_database: default       
> 18/05/22 16:12:48 INFO HiveMetaStore: 0: get_database: global_temp
> 18/05/22 16:12:48 INFO audit: ugi=xubo        ip=unknown-ip-addr      
> cmd=get_database: global_temp   
> 18/05/22 16:12:48 WARN ObjectStore: Failed to get database global_temp, 
> returning NoSuchObjectException
> 18/05/22 16:12:48 INFO SessionState: Created local directory: 
> /var/folders/lw/4y5plg0x7rq45h38m4sfxlbm0000gn/T/b17d0cc6-c9ad-48f5-8df5-df9b88fe3736_resources
> 18/05/22 16:12:48 INFO SessionState: Created HDFS directory: 
> /tmp/hive/xubo/b17d0cc6-c9ad-48f5-8df5-df9b88fe3736
> 18/05/22 16:12:48 INFO SessionState: Created local directory: 
> /var/folders/lw/4y5plg0x7rq45h38m4sfxlbm0000gn/T/xubo/b17d0cc6-c9ad-48f5-8df5-df9b88fe3736
> 18/05/22 16:12:48 INFO SessionState: Created HDFS directory: 
> /tmp/hive/xubo/b17d0cc6-c9ad-48f5-8df5-df9b88fe3736/_tmp_space.db
> 18/05/22 16:12:48 INFO HiveClientImpl: Warehouse location for Hive client 
> (version 1.2.1) is 
> file:/Users/xubo/Desktop/xubo/git/carbondata1/spark-warehouse
> 18/05/22 16:12:48 INFO StateStoreCoordinatorRef: Registered 
> StateStoreCoordinator endpoint
> 18/05/22 16:12:49 AUDIT CarbonDropTableCommand: 
> [localhost][xubo][Thread-1]Deleting table [carbonsession_table] under 
> database [default]
> 18/05/22 16:12:50 AUDIT CarbonDropTableCommand: 
> [localhost][xubo][Thread-1]Deleted table [carbonsession_table] under database 
> [default]
> 18/05/22 16:12:50 AUDIT CarbonCreateTableCommand: 
> [localhost][xubo][Thread-1]Creating Table with Database name [default] and 
> Table name [carbonsession_table]
> 18/05/22 16:12:51 AUDIT CarbonCreateTableCommand: 
> [localhost][xubo][Thread-1]Table created with Database name [default] and 
> Table name [carbonsession_table]
> 18/05/22 16:12:52 AUDIT CarbonDataRDDFactory$: 
> [localhost][xubo][Thread-1]Data load request has been received for table 
> default.carbonsession_table
> 18/05/22 16:12:52 AUDIT CarbonDataRDDFactory$: 
> [localhost][xubo][Thread-1]Data load is successful for 
> default.carbonsession_table
> search mode asynchronous query
> 18/05/22 16:12:53 ERROR CarbonSession: Exception when executing search mode: 
> null, fallback to SparkSQL
> Exception in thread "main" java.util.concurrent.ExecutionException: 
> java.lang.NullPointerException
>       at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>       at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>       at 
> org.apache.carbondata.examples.SearchModeExample$$anonfun$org$apache$carbondata$examples$SearchModeExample$$runAsynchrousSQL$1.apply(SearchModeExample.scala:179)
>       at 
> org.apache.carbondata.examples.SearchModeExample$$anonfun$org$apache$carbondata$examples$SearchModeExample$$runAsynchrousSQL$1.apply(SearchModeExample.scala:179)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>       at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>       at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>       at 
> org.apache.carbondata.examples.SearchModeExample$.org$apache$carbondata$examples$SearchModeExample$$runAsynchrousSQL(SearchModeExample.scala:179)
>       at 
> org.apache.carbondata.examples.SearchModeExample$$anonfun$exampleBody$1.apply$mcV$sp(SearchModeExample.scala:113)
>       at 
> org.apache.carbondata.examples.SearchModeExample$$anonfun$exampleBody$1.apply(SearchModeExample.scala:113)
>       at 
> org.apache.carbondata.examples.SearchModeExample$$anonfun$exampleBody$1.apply(SearchModeExample.scala:113)
>       at 
> org.apache.spark.sql.catalyst.util.package$.benchmark(package.scala:129)
>       at 
> org.apache.carbondata.examples.SearchModeExample$.exampleBody(SearchModeExample.scala:112)
>       at 
> org.apache.carbondata.examples.SearchModeExample$.main(SearchModeExample.scala:70)
>       at 
> org.apache.carbondata.examples.SearchModeExample.main(SearchModeExample.scala)
> Caused by: java.lang.NullPointerException
>       at 
> org.apache.carbondata.core.indexstore.blockletindex.BlockletDataMap.prune(BlockletDataMap.java:659)
>       at 
> org.apache.carbondata.core.indexstore.blockletindex.BlockletDataMap.prune(BlockletDataMap.java:705)
>       at 
> org.apache.carbondata.core.datamap.TableDataMap.prune(TableDataMap.java:101)
>       at 
> org.apache.carbondata.core.datamap.dev.expr.DataMapExprWrapperImpl.prune(DataMapExprWrapperImpl.java:52)
>       at 
> org.apache.carbondata.hadoop.api.CarbonInputFormat.getPrunedBlocklets(CarbonInputFormat.java:409)
>       at 
> org.apache.carbondata.hadoop.api.CarbonInputFormat.getDataBlocksOfSegment(CarbonInputFormat.java:346)
>       at 
> org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:525)
>       at 
> org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:249)
>       at 
> org.apache.carbondata.spark.rdd.CarbonScanRDD.getPartitions(CarbonScanRDD.scala:121)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
>       at scala.Option.getOrElse(Option.scala:121)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
>       at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
>       at scala.Option.getOrElse(Option.scala:121)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
>       at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
>       at scala.Option.getOrElse(Option.scala:121)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
>       at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>       at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>       at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>       at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>       at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>       at 
> org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
>       at 
> org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2861)
>       at 
> org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2387)
>       at 
> org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2387)
>       at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2842)
>       at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
>       at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2841)
>       at org.apache.spark.sql.Dataset.collect(Dataset.scala:2387)
>       at 
> org.apache.carbondata.examples.SearchModeExample$$anonfun$3$$anon$1.run(SearchModeExample.scala:174)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> {code}
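> For context on the first trace: the example's runAsynchrousSQL submits queries to a thread pool and then waits on the futures, so FutureTask.get reports the NullPointerException thrown during blocklet pruning as an ExecutionException. A minimal sketch of that pattern follows (the method name, pool size, and query text are illustrative assumptions, not the exact example code):
> {code:scala}
> import java.util.concurrent.{Executors, Future}
> import org.apache.spark.sql.SparkSession
>
> // Hypothetical sketch of the asynchronous query loop in SearchModeExample;
> // names and sizes are assumptions for illustration only.
> def runAsyncQueries(spark: SparkSession, n: Int): Unit = {
>   val pool = Executors.newFixedThreadPool(n)
>   val futures: Seq[Future[_]] = (1 to n).map { _ =>
>     pool.submit(new Runnable {
>       override def run(): Unit = {
>         // With search mode enabled, this collect() is where the
>         // NullPointerException in BlockletDataMap.prune surfaces.
>         spark.sql("SELECT * FROM carbonsession_table WHERE id = 100").collect()
>       }
>     })
>   }
>   // get() rethrows the worker's NPE wrapped in ExecutionException,
>   // matching the stack trace above.
>   futures.foreach(_.get())
>   pool.shutdown()
> }
> {code}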
> Error 2: in another run, the first round of search-mode queries succeeds, but a later asynchronous round throws the same NullPointerException and falls back to SparkSQL:
> {code:java}
> 18/05/30 15:48:41 INFO SessionState: Created HDFS directory: 
> /tmp/hive/xubo/125f7359-dbef-42f6-99a2-12a3ad07d2e9/_tmp_space.db
> 18/05/30 15:48:41 INFO HiveClientImpl: Warehouse location for Hive client 
> (version 1.2.1) is 
> file:/Users/xubo/Desktop/xubo/git/carbondata2/spark-warehouse/
> 18/05/30 15:48:41 INFO StateStoreCoordinatorRef: Registered 
> StateStoreCoordinator endpoint
> 18/05/30 15:48:42 AUDIT CarbonCreateTableCommand: 
> [localhost][xubo][Thread-1]Creating Table with Database name [default] and 
> Table name [carbonsession_table]
> 18/05/30 15:48:42 ERROR DataMapStoreManager: main failed to get carbon table 
> from table Path
> 18/05/30 15:48:43 AUDIT CarbonCreateTableCommand: 
> [localhost][xubo][Thread-1]Table created with Database name [default] and 
> Table name [carbonsession_table]
> 18/05/30 15:48:44 AUDIT CarbonDataRDDFactory$: 
> [localhost][xubo][Thread-1]Data load request has been received for table 
> default.carbonsession_table
> 18/05/30 15:48:44 AUDIT CarbonDataRDDFactory$: 
> [localhost][xubo][Thread-1]Data load is successful for 
> default.carbonsession_table
> search mode asynchronous query
> 2605.495751ms
> search mode synchronous query
> 3627.856844ms
> sparksql asynchronous query
> 2951.804416ms
> sparksql synchronous query
> 9028.520219ms
> search mode asynchronous query
> java.lang.NullPointerException
>       at 
> org.apache.carbondata.core.indexstore.blockletindex.BlockletDataMap.prune(BlockletDataMap.java:679)
>       at 
> org.apache.carbondata.core.datamap.TableDataMap.prune(TableDataMap.java:101)
>       at 
> org.apache.carbondata.core.datamap.dev.expr.DataMapExprWrapperImpl.prune(DataMapExprWrapperImpl.java:52)
>       at 
> org.apache.carbondata.hadoop.api.CarbonInputFormat.getPrunedBlocklets(CarbonInputFormat.java:436)
>       at 
> org.apache.carbondata.hadoop.api.CarbonInputFormat.getDataBlocksOfSegment(CarbonInputFormat.java:373)
>       at 
> org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:525)
>       at 
> org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:249)
>       at org.apache.spark.rpc.Master.pruneBlock(Master.scala:271)
>       at org.apache.spark.rpc.Master.search(Master.scala:217)
>       at 
> org.apache.carbondata.store.SparkCarbonStore.search(SparkCarbonStore.scala:144)
>       at org.apache.spark.sql.CarbonSession.runSearch(CarbonSession.scala:225)
>       at 
> org.apache.spark.sql.CarbonSession.org$apache$spark$sql$CarbonSession$$trySearchMode(CarbonSession.scala:180)
>       at 
> org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:100)
>       at 
> org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:97)
>       at 
> org.apache.spark.sql.CarbonSession.withProfiler(CarbonSession.scala:156)
>       at org.apache.spark.sql.CarbonSession.sql(CarbonSession.scala:95)
>       at 
> org.apache.carbondata.examples.SearchModeExample$$anonfun$3$$anon$1.run(SearchModeExample.scala:168)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> 18/05/30 15:49:04 ERROR CarbonSession: Exception when executing search mode: 
> null, fallback to SparkSQL
> java.lang.NullPointerException
>       at 
> org.apache.carbondata.core.indexstore.blockletindex.BlockletDataMap.prune(BlockletDataMap.java:659)
>       at 
> org.apache.carbondata.core.indexstore.blockletindex.BlockletDataMap.prune(BlockletDataMap.java:705)
>       at 
> org.apache.carbondata.core.datamap.TableDataMap.prune(TableDataMap.java:101)
>       at 
> org.apache.carbondata.core.datamap.dev.expr.DataMapExprWrapperImpl.prune(DataMapExprWrapperImpl.java:52)
>       at 
> org.apache.carbondata.hadoop.api.CarbonInputFormat.getPrunedBlocklets(CarbonInputFormat.java:436)
>       at 
> org.apache.carbondata.hadoop.api.CarbonInputFormat.getDataBlocksOfSegment(CarbonInputFormat.java:373)
>       at 
> org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:525)
>       at 
> org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getSplits(CarbonTableInputFormat.java:249)
>       at org.apache.spark.rpc.Master.pruneBlock(Master.scala:271)
>       at org.apache.spark.rpc.Master.search(Master.scala:217)
>       at 
> org.apache.carbondata.store.SparkCarbonStore.search(SparkCarbonStore.scala:144)
>       at org.apache.spark.sql.CarbonSession.runSearch(CarbonSession.scala:225)
>       at 
> org.apache.spark.sql.CarbonSession.org$apache$spark$sql$CarbonSession$$trySearchMode(CarbonSession.scala:180)
>       at 
> org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:100)
>       at 
> org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:97)
>       at 
> org.apache.spark.sql.CarbonSession.withProfiler(CarbonSession.scala:156)
>       at org.apache.spark.sql.CarbonSession.sql(CarbonSession.scala:95)
>       at 
> org.apache.carbondata.examples.SearchModeExample$$anonfun$3$$anon$1.run(SearchModeExample.scala:168)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> 18/05/30 15:49:04 ERROR CarbonSession: Exception when executing search mode: 
> null, fallback to SparkSQL
> 815.116785ms
> search mode synchronous query
> 2672.414774ms
> sparksql asynchronous query
> 1798.036148ms
> sparksql synchronous query
> 7715.027916ms
> 18/05/30 15:49:16 AUDIT CarbonDropTableCommand: 
> [localhost][xubo][Thread-1]Deleting table [carbonsession_table] under 
> database [default]
> 18/05/30 15:49:17 AUDIT CarbonDropTableCommand: 
> [localhost][xubo][Thread-1]Deleted table [carbonsession_table] under database 
> [default]
> Finished!
> Process finished with exit code 0
> {code}
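> For reference, the example toggles search mode on the CarbonSession around each round of queries, which is why the second trace goes through org.apache.spark.rpc.Master.search. A minimal setup sketch, assuming the getOrCreateCarbonSession builder extension and the startSearchMode/stopSearchMode API in 1.4.x (the master URL and store path are placeholders):
> {code:scala}
> import org.apache.spark.sql.{CarbonSession, SparkSession}
> import org.apache.spark.sql.CarbonSession._
>
> // Minimal sketch, assuming the CarbonSession builder extension; the
> // master URL and store path below are illustrative placeholders.
> val spark = SparkSession
>   .builder()
>   .master("local")
>   .appName("SearchModeExample")
>   .getOrCreateCarbonSession("/tmp/carbon.store")
>
> // Queries issued between start and stop take the search-mode path
> // (Master.pruneBlock/search), where the NPE above is thrown.
> spark.asInstanceOf[CarbonSession].startSearchMode()
> spark.sql("SELECT * FROM carbonsession_table WHERE id = 100").show()
> spark.asInstanceOf[CarbonSession].stopSearchMode()
> {code}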



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
