I created a streaming k-means job based on the Scala example. It keeps running without any errors, but it never prints any predictions.
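For context, the job follows the standard streaming k-means pattern from the Spark MLlib docs. A minimal sketch of that pattern (paths, k, and dimensions here are placeholders, not the attached code) looks like this; note the two points that most often cause silent "no output" runs: `textFileStream` only picks up files that are atomically moved into the monitored directory *after* the stream starts, and predictions are only shown if an output action such as `print()` is attached:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.mllib.clustering.StreamingKMeans
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingKMeansSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StreamingKMeansSketch")
    val ssc = new StreamingContext(conf, Seconds(5))

    // textFileStream only sees files created (moved in atomically) after
    // the stream starts; pre-existing files are ignored, which shows up
    // in the log as empty "New files at time ... ms:" lines.
    val trainingData = ssc.textFileStream("/tmp/training") // placeholder path
      .map(Vectors.parse)
    val testData = ssc.textFileStream("/tmp/test")         // placeholder path
      .map(LabeledPoint.parse)

    val model = new StreamingKMeans()
      .setK(2)                  // placeholder k
      .setDecayFactor(1.0)
      .setRandomCenters(3, 0.0) // dim must match the feature vectors

    model.trainOn(trainingData)
    // Without an output action such as print(), no predictions appear.
    model.predictOnValues(testData.map(lp => (lp.label, lp.features))).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

If the log shows batches completing in microseconds and the "New files" lines are empty, the stream is most likely processing empty batches because no new files are landing in the watched directories.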
Here is the log:

19:15:05,050 INFO org.apache.spark.streaming.scheduler.InputInfoTracker - remove old batch metadata: 1461678240000 ms
19:15:10,001 INFO org.apache.spark.streaming.dstream.FileInputDStream - Finding new files took 1 ms
19:15:10,001 INFO org.apache.spark.streaming.dstream.FileInputDStream - New files at time 1461678310000 ms:
19:15:10,007 INFO org.apache.spark.streaming.dstream.FileInputDStream - Finding new files took 2 ms
19:15:10,007 INFO org.apache.spark.streaming.dstream.FileInputDStream - New files at time 1461678310000 ms:
19:15:10,014 INFO org.apache.spark.streaming.scheduler.JobScheduler - Added jobs for time 1461678310000 ms
19:15:10,015 INFO org.apache.spark.streaming.scheduler.JobScheduler - Starting job streaming job 1461678310000 ms.0 from job set of time 1461678310000 ms
19:15:10,028 INFO org.apache.spark.SparkContext - Starting job: collect at StreamingKMeans.scala:89
19:15:10,028 INFO org.apache.spark.scheduler.DAGScheduler - Job 292 finished: collect at StreamingKMeans.scala:89, took 0.000041 s
19:15:10,029 INFO org.apache.spark.streaming.scheduler.JobScheduler - Finished job streaming job 1461678310000 ms.0 from job set of time 1461678310000 ms
19:15:10,029 INFO org.apache.spark.streaming.scheduler.JobScheduler - Starting job streaming job 1461678310000 ms.1 from job set of time 1461678310000 ms
-------------------------------------------
Time: 1461678310000 ms
-------------------------------------------
19:15:10,036 INFO org.apache.spark.streaming.scheduler.JobScheduler - Finished job streaming job 1461678310000 ms.1 from job set of time 1461678310000 ms
19:15:10,036 INFO org.apache.spark.rdd.MapPartitionsRDD - Removing RDD 2912 from persistence list
19:15:10,037 INFO org.apache.spark.rdd.MapPartitionsRDD - Removing RDD 2911 from persistence list
19:15:10,037 INFO org.apache.spark.storage.BlockManager - Removing RDD 2912
19:15:10,037 INFO org.apache.spark.streaming.scheduler.JobScheduler - Total delay: 0.036 s for time 1461678310000 ms (execution: 0.021 s)
19:15:10,037 INFO org.apache.spark.rdd.UnionRDD - Removing RDD 2800 from persistence list
19:15:10,037 INFO org.apache.spark.storage.BlockManager - Removing RDD 2911
19:15:10,037 INFO org.apache.spark.streaming.dstream.FileInputDStream - Cleared 1 old files that were older than 1461678250000 ms: 1461678245000 ms
19:15:10,037 INFO org.apache.spark.rdd.MapPartitionsRDD - Removing RDD 2917 from persistence list
19:15:10,037 INFO org.apache.spark.storage.BlockManager - Removing RDD 2800
19:15:10,037 INFO org.apache.spark.rdd.MapPartitionsRDD - Removing RDD 2916 from persistence list
19:15:10,037 INFO org.apache.spark.rdd.MapPartitionsRDD - Removing RDD 2915 from persistence list
19:15:10,037 INFO org.apache.spark.rdd.MapPartitionsRDD - Removing RDD 2914 from persistence list
19:15:10,037 INFO org.apache.spark.rdd.UnionRDD - Removing RDD 2803 from persistence list
19:15:10,037 INFO org.apache.spark.streaming.dstream.FileInputDStream - Cleared 1 old files that were older than 1461678250000 ms: 1461678245000 ms
19:15:10,038 INFO org.apache.spark.streaming.scheduler.ReceivedBlockTracker - Deleting batches ArrayBuffer()
19:15:10,038 INFO org.apache.spark.storage.BlockManager - Removing RDD 2917
19:15:10,038 INFO org.apache.spark.streaming.scheduler.InputInfoTracker - remove old batch metadata: 1461678245000 ms
19:15:10,038 INFO org.apache.spark.storage.BlockManager - Removing RDD 2914
19:15:10,038 INFO org.apache.spark.storage.BlockManager - Removing RDD 2916
19:15:10,038 INFO org.apache.spark.storage.BlockManager - Removing RDD 2915
19:15:10,038 INFO org.apache.spark.storage.BlockManager - Removing RDD 2803
19:15:15,001 INFO org.apache.spark.streaming.dstream.FileInputDStream - Finding new files took 1 ms
19:15:15,001 INFO org.apache.spark.streaming.dstream.FileInputDStream - New files at time 1461678315000 ms:
Attachment: StreamingKmeans.java