Re: Apache Spark Structured Streaming - Kafka Streaming - Option to ignore checkpoint

2018-06-06 Thread licl
I hit the same issue, and I tried deleting the checkpoint directory before starting the job. But Spark still seems to resume from the correct offsets even after the checkpoint directory is deleted. I don't know how Spark does this without the checkpoint's metadata.
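For reference, the place a restarted query's read position is normally controlled, independent of any old checkpoint, is the Kafka source's `startingOffsets` option together with a fresh `checkpointLocation`. A minimal sketch, assuming a local broker and a topic named `events` (both names are hypothetical, not from the thread):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaRestartSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("kafka-restart-sketch")
            .getOrCreate();

        // startingOffsets applies only when no checkpoint exists for this query;
        // once a checkpoint is present, offsets come from the checkpoint instead.
        Dataset<Row> stream = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092") // assumption
            .option("subscribe", "events")                        // assumption
            .option("startingOffsets", "earliest")
            .load();

        // Pointing at a fresh checkpoint directory makes the query start over
        // according to startingOffsets rather than resuming old state.
        StreamingQuery query = stream.writeStream()
            .format("console")
            .option("checkpointLocation", "/tmp/new-checkpoint-dir") // assumption
            .start();

        query.awaitTermination();
    }
}
```

If the job still resumes from old offsets after the checkpoint is removed, one thing worth checking is whether a second checkpoint path is in play (for example `spark.sql.streaming.checkpointLocation` set globally versus the per-query option).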

Improving the cube build with the Fast Cubing algorithm from Apache Kylin

2016-03-15 Thread licl
Hi, I tried to build a cube on a 100-million-row data set. With 9 fields in the cube and 10 cores, it took me nearly a whole day to finish the job, and at the same time it generated almost 1 TB of data in the "/tmp" folder. Could we refer to the "fast cubing" algorithm in Apache Kylin to make

Can't read data correctly through beeline when data is saved by HiveContext

2015-12-22 Thread licl
Hi, here is my Java code:

SparkConf sparkConf = Constance.getSparkConf();
JavaSparkContext sc = new JavaSparkContext(sparkConf);
SQLContext sql = new SQLContext(sc);
HiveContext sqlContext = new HiveContext(sc.sc());
List fields = new
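The message breaks off while declaring a field list, which in the Spark 1.x Java API usually means building a schema programmatically. A minimal sketch of that pattern, with hypothetical field names (the original message does not show them):

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class SchemaSketch {
    public static StructType buildSchema() {
        // Build a StructType field by field; the names and types here are
        // illustrative only, not taken from the original post.
        List<StructField> fields = new ArrayList<>();
        fields.add(DataTypes.createStructField("id", DataTypes.StringType, true));
        fields.add(DataTypes.createStructField("name", DataTypes.StringType, true));
        return DataTypes.createStructType(fields);
    }
}
```

The resulting schema would then typically be passed to `sqlContext.createDataFrame(rowRDD, schema)` before saving the table.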

Re: Can't read data correctly through beeline when data is saved by HiveContext

2015-12-22 Thread licl
I know the reason now. I changed the metastore with Java code, but the Thrift server caches the metastore in memory, so it needs to be refreshed from MySQL. But how?

Re: Can't read data correctly through beeline when data is saved by HiveContext

2015-12-22 Thread licl
I solved this now; just run 'refresh table shop.id' in beeline.
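The same fix can be run non-interactively from a shell; a sketch, assuming a local HiveServer2/Thrift server on the default port (the JDBC URL is an assumption, and shop.id is the table name from the thread):

```shell
# Refresh the Thrift server's cached metadata for the table written by HiveContext.
# The connection URL assumes a local HiveServer2 on the default port 10000.
beeline -u jdbc:hive2://localhost:10000 -e "REFRESH TABLE shop.id;"
```

This forces the Thrift server to reload the table's metadata from the metastore, so subsequent beeline queries see the data written by the Spark job.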