Error executing the alternating least squares example

2015-10-08 Thread haridass saisriram
Hi,

   I downloaded Spark 1.5.0 on Windows 7 and built it using:

 build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package


and tried running the alternating least squares (ALS) example
(http://spark.apache.org/docs/latest/mllib-collaborative-filtering.html)
using spark-shell:

import org.apache.spark.mllib.recommendation.ALS
import org.apache.spark.mllib.recommendation.MatrixFactorizationModel
import org.apache.spark.mllib.recommendation.Rating

val data = sc.textFile("C://cygwin64//home//test.txt")
val ratings = data.map(_.split(',') match {
  case Array(user, item, rate) => Rating(user.toInt, item.toInt, rate.toDouble)
})

val rank = 10
val numIterations = 10
val model = ALS.train(ratings, rank, numIterations, 0.01)

After ALS.train I get the following error:


15/10/08 11:01:08 ERROR Executor: Exception in task 1.0 in stage 1.0 (TID 3)
java.lang.AssertionError: assertion failed:
Current position 1413 do not equal to expected position 707
after transferTo, please check your kernel version to see if it is 2.6.32,
this is a kernel bug which will lead to unexpected behavior when using
transferTo.
You can set spark.file.transferTo = false to disable this NIO feature.

at scala.Predef$.assert(Predef.scala:179)
at org.apache.spark.util.Utils$$anonfun$copyStream$1.apply$mcJ$sp(Utils.scala:273)
at org.apache.spark.util.Utils$$anonfun$copyStream$1.apply(Utils.scala:252)
at org.apache.spark.util.Utils$$anonfun$copyStream$1.apply(Utils.scala:252)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1206)
at org.apache.spark.util.Utils$.copyStream(Utils.scala:292)
at org.apache.spark.util.Utils.copyStream(Utils.scala)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.writePartitionedFile(BypassMergeSortShuffleWriter.java:149)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:80)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1280)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1268)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1267)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1267)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1493)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1455)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1444)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1813)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1826)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1839)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1910)
at org.apache.spark.rdd.RDD.count(RDD.scala:1121)
at org.apache.spark.ml.recommendation.ALS$.train(ALS.scala:550)
at org.apache.spark.mllib.recommendation.ALS.run(ALS.scala:239)
at org.apache.spark.mllib.recommendation.ALS$.train(ALS.scala:328)
at org.apache.spark.mllib.recommendation.ALS$.train(ALS.scala:346)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
at $iwC$$iwC$$iwC.<init>(<console>:45)
at $iwC$$iwC.<init>(<console>:47)
at $iwC.<init>(<console>:49)
at <init>(<console>:51)
at .<init>(<console>:55)
at .<clinit>(<console>)
at .<init>(<console>:7)
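
For reference, the error message itself points at a workaround: disabling the NIO transferTo path. A minimal sketch of how that setting could be applied (the application name below is hypothetical, and whether this actually resolves the Windows issue is not confirmed here):

// From spark-shell the flag has to be passed at start-up, since the running
// SparkContext cannot be reconfigured:
//
//   bin/spark-shell --conf spark.file.transferTo=false
//
// In a standalone application it can be set on the SparkConf before the
// SparkContext is created:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("ALSExample")               // hypothetical application name
  .set("spark.file.transferTo", "false")  // plain stream copy instead of NIO transferTo
val sc = new SparkContext(conf)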
  

SparkSQL: Reading data from HDFS and storing into multiple paths

2015-10-01 Thread haridass saisriram
Hi,

  I am trying to find a simple example of reading a data file on HDFS. The
file has the following format:
a,b,c,yyyy,mm
a1,b1,c1,2015,09
a2,b2,c2,2014,08


I would like to read this file and store it in HDFS, partitioned by year and
month, something like this:
/path/to/hdfs/yyyy/mm

I want to specify only the "/path/to/hdfs/" prefix; the yyyy/mm part should be
populated automatically based on those columns. Could someone point me in the
right direction?

Thank you,
Sri Ram
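
One commonly suggested way to get this kind of layout is the DataFrame writer's partitionBy. A minimal sketch, assuming Spark 1.5's DataFrame API and hypothetical input path and column names; note that partitionBy writes key=value directory names rather than bare yyyy/mm:

// sqlContext is pre-created in spark-shell
import sqlContext.implicits._

// Hypothetical record type matching the five columns in the example file.
case class Record(a: String, b: String, c: String, yyyy: String, mm: String)

// Header-row handling is omitted for brevity.
val df = sc.textFile("hdfs:///path/to/input.csv")
  .map(_.split(","))
  .map(f => Record(f(0).trim, f(1).trim, f(2).trim, f(3).trim, f(4).trim))
  .toDF()

// One sub-directory is written per distinct (yyyy, mm) pair,
// e.g. /path/to/hdfs/yyyy=2015/mm=09/part-*.
df.write.partitionBy("yyyy", "mm").parquet("hdfs:///path/to/hdfs/")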