[ https://issues.apache.org/jira/browse/SPARK-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-5820:
-----------------------------
    Component/s: Examples
       Priority: Minor  (was: Major)

> Example does not work when using SOCKS proxy
> --------------------------------------------
>
>                 Key: SPARK-5820
>                 URL: https://issues.apache.org/jira/browse/SPARK-5820
>             Project: Spark
>          Issue Type: Bug
>          Components: Examples
>    Affects Versions: 1.2.1
>            Reporter: Eric O. LEBIGOT (EOL)
>            Priority: Minor
>
> When using a SOCKS proxy (on OS X 10.10.2), even the basic example
> ./bin/run-example SparkPi 10 fails.
> -- Partial log --
> 15/02/14 23:23:00 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
> 15/02/14 23:23:00 INFO TaskSchedulerImpl: Cancelling stage 0
> 15/02/14 23:23:00 INFO Executor: Executor is trying to kill task 1.0 in stage 0.0 (TID 1)
> 15/02/14 23:23:00 INFO TaskSchedulerImpl: Stage 0 was cancelled
> 15/02/14 23:23:00 INFO DAGScheduler: Job 0 failed: reduce at SparkPi.scala:35, took 1.920223 s
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.net.SocketException: Malformed reply from SOCKS server
>         at java.net.SocksSocketImpl.readSocksReply(SocksSocketImpl.java:129)
>         at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:503)
>         at java.net.Socket.connect(Socket.java:579)
>         at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>         at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>         at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>         at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1003)
>         at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:951)
>         at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>         at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:582)
>         at org.apache.spark.util.Utils$.fetchFile(Utils.scala:433)
>         at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:356)
>         at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:353)
>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>         at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
>         at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:353)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:181)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:724)
> Driver stacktrace:
>         at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>         at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
>         at scala.Option.foreach(Option.scala:245)
>         at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>         at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
>         at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
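
For what it's worth, the executor-side frames above (org.apache.spark.util.Utils$.doFetchFile going through sun.net.www.protocol.http.HttpURLConnection into java.net.SocksSocketImpl) indicate the failure happens while the executor fetches job dependencies from the driver's HTTP file server, and that this local connection is being routed through the SOCKS proxy. Assuming the proxy is picked up via the standard JVM SOCKS properties, one possible workaround (a sketch, not verified against this exact setup) is to exempt local addresses using the JVM's socksNonProxyHosts property, passed through Spark's extraJavaOptions settings. The examples jar path below is assumed from a 1.2.x binary distribution and may differ:

    # Sketch: bypass the SOCKS proxy for local traffic on both the driver and
    # the executors. socksNonProxyHosts is a standard JVM networking property;
    # adjust the host list to match how your machine addresses itself.
    ./bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --conf "spark.driver.extraJavaOptions=-DsocksNonProxyHosts=localhost|127.0.0.1" \
      --conf "spark.executor.extraJavaOptions=-DsocksNonProxyHosts=localhost|127.0.0.1" \
      lib/spark-examples-*.jar 10

If the executor reaches the driver by machine hostname rather than a loopback address, that hostname would also need to be added to the exemption list.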


