[ https://issues.apache.org/jira/browse/SPARK-32604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176992#comment-17176992 ]

Hyukjin Kwon commented on SPARK-32604:
--------------------------------------

Are you interested in submitting a PR to fix it?

> Bug in ALSModel Python Documentation
> ------------------------------------
>
>                 Key: SPARK-32604
>                 URL: https://issues.apache.org/jira/browse/SPARK-32604
>             Project: Spark
>          Issue Type: Bug
>          Components: Documentation, PySpark
>    Affects Versions: 2.4.0, 3.0.0
>            Reporter: Zach Cahoone
>            Priority: Minor
>
> In the ALSModel documentation
> ([https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]),
> there is a bug that causes DataFrame creation to fail with the following
> error:
> {code:java}
> Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 15, 10.0.0.133, executor 10): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
>   File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 372, in main
>     process()
>   File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 367, in process
>     serializer.dump_stream(func(split_index, iterator), outfile)
>   File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 390, in dump_stream
>     vs = list(itertools.islice(iterator, batch))
>   File "/usr/lib/spark/python/pyspark/rdd.py", line 1354, in takeUpToNumLeft
>     yield next(iterator)
>   File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/util.py", line 99, in wrapper
>     return f(*args, **kwargs)
>   File "<ipython-input-5-86574b26abad>", line 24, in <lambda>
> NameError: name 'long' is not defined
>       at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:456)
>       at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:592)
>       at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:575)
>       at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:410)
>       at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>       at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>       at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
>       at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
>       at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
>       at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
>       at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
>       at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
>       at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
>       at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
>       at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
>       at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153)
>       at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153)
>       at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2121)
>       at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2121)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>       at org.apache.spark.scheduler.Task.run(Task.scala:121)
>       at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
>       at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace:
>       at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1890)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
>       at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>       at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1877)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:929)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:929)
>       at scala.Option.foreach(Option.scala:257)
>       at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:929)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2111)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2060)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2049)
>       at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
>       at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:740)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:2081)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:2102)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:2121)
>       at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:153)
>       at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>       at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>       at py4j.Gateway.invoke(Gateway.java:282)
>       at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>       at py4j.commands.CallCommand.execute(CallCommand.java:79)
>       at py4j.GatewayConnection.run(GatewayConnection.java:238)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
>   File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 372, in main
>     process()
>   File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 367, in process
>     serializer.dump_stream(func(split_index, iterator), outfile)
>   File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 390, in dump_stream
>     vs = list(itertools.islice(iterator, batch))
>   File "/usr/lib/spark/python/pyspark/rdd.py", line 1354, in takeUpToNumLeft
>     yield next(iterator)
>   File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/util.py", line 99, in wrapper
>     return f(*args, **kwargs)
>   File "<ipython-input-5-86574b26abad>", line 24, in <lambda>
> NameError: name 'long' is not defined
>       at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:456)
>       at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:592)
>       at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:575)
>       at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:410)
>       at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>       at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>       at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
>       at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
>       at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
>       at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
>       at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
>       at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
>       at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
>       at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
>       at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
>       at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153)
>       at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153)
>       at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2121)
>       at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2121)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>       at org.apache.spark.scheduler.Task.run(Task.scala:121)
>       at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
>       at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       ... 1 more
> {code}
> To replicate the error, train an ALSModel with the documentation code under Python 3, as sketched below; the "long" builtin was removed in Python 3, so the call to long() raises the NameError above.
>  
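> A minimal sketch of the failing part of the documentation example (this assumes a SparkSession bound to the name "spark" and the sample ratings file shipped with Spark, as in the docs):
> {code:python}
> from pyspark.sql import Row
>
> # Load the MovieLens sample ratings shipped with Spark; fields are "::"-separated.
> lines = spark.read.text("data/mllib/als/sample_movielens_ratings.txt").rdd
> parts = lines.map(lambda row: row.value.split("::"))
>
> # Under Python 3 this lambda fails on the workers: the "long" builtin no longer exists.
> ratingsRDD = parts.map(lambda p: Row(userId=int(p[0]), movieId=int(p[1]),
>                                      rating=float(p[2]), timestamp=long(p[3])))
> ratings = spark.createDataFrame(ratingsRDD)
> {code}
>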
> To fix, change "long" to "int" in the following line:
> {code:python}
> ratingsRDD = parts.map(lambda p: Row(userId=int(p[0]), movieId=int(p[1]),
>                                      rating=float(p[2]), timestamp=long(p[3])))
> {code}
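>
> For clarity, the same line after applying the suggested change:
> {code:python}
> ratingsRDD = parts.map(lambda p: Row(userId=int(p[0]), movieId=int(p[1]),
>                                      rating=float(p[2]), timestamp=int(p[3])))
> {code}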
>  
> The example file in the Spark repository already includes this change, but the
> published documentation has not been updated to match:
> [https://github.com/apache/spark/blob/master/examples/src/main/python/ml/als_example.py]
>  


