All,

I identified the reason for my problem.

Regards,
D

From: viladid...@hotmail.com
To: user@spark.apache.org
Subject: First project in scala IDE : first problem
Date: Mon, 9 Nov 2015 23:39:55 +0000




All,

This is my first run with Scala and Maven on Spark, using Scala IDE on my single computer, and I am getting the problem below.

Thanks in advance,
Didier
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/11/09 23:30:52 INFO SparkContext: Running Spark version 1.4.0
15/11/09 23:30:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/09 23:30:53 INFO SecurityManager: Changing view acls to: dv186010
15/11/09 23:30:53 INFO SecurityManager: Changing modify acls to: dv186010
15/11/09 23:30:53 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(dv186010); users with modify permissions: Set(dv186010)
15/11/09 23:30:54 INFO Slf4jLogger: Slf4jLogger started
15/11/09 23:30:54 INFO Remoting: Starting remoting
15/11/09 23:30:54 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.28.1:59209]
15/11/09 23:30:54 INFO Utils: Successfully started service 'sparkDriver' on port 59209.
15/11/09 23:30:54 INFO SparkEnv: Registering MapOutputTracker
15/11/09 23:30:54 INFO SparkEnv: Registering BlockManagerMaster
15/11/09 23:30:54 INFO DiskBlockManager: Created local directory at C:\Users\DV186010\AppData\Local\Temp\spark-123c0ccc-d677-4fae-b9fd-41b9b243905e\blockmgr-65d29cdd-d04f-48f4-85ba-3df96ee4aca7
15/11/09 23:30:54 INFO MemoryStore: MemoryStore started with capacity 1955.6 MB
15/11/09 23:30:54 INFO HttpFileServer: HTTP File server directory is C:\Users\DV186010\AppData\Local\Temp\spark-123c0ccc-d677-4fae-b9fd-41b9b243905e\httpd-62fbcdb8-3fbd-4206-9235-6a9586a3a6a1
15/11/09 23:30:54 INFO HttpServer: Starting HTTP Server
15/11/09 23:30:54 INFO Utils: Successfully started service 'HTTP file server' on port 59210.
15/11/09 23:30:54 INFO SparkEnv: Registering OutputCommitCoordinator
15/11/09 23:30:55 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/11/09 23:30:55 INFO SparkUI: Started SparkUI at http://192.168.28.1:4040
15/11/09 23:30:55 INFO Executor: Starting executor ID driver on host localhost
15/11/09 23:30:55 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 59229.
15/11/09 23:30:55 INFO NettyBlockTransferService: Server created on 59229
15/11/09 23:30:55 INFO BlockManagerMaster: Trying to register BlockManager
15/11/09 23:30:55 INFO BlockManagerMasterEndpoint: Registering block manager localhost:59229 with 1955.6 MB RAM, BlockManagerId(driver, localhost, 59229)
15/11/09 23:30:55 INFO BlockManagerMaster: Registered BlockManager
15/11/09 23:30:56 INFO MemoryStore: ensureFreeSpace(110248) called with curMem=0, maxMem=2050605711
15/11/09 23:30:56 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 107.7 KB, free 1955.5 MB)
15/11/09 23:30:56 INFO MemoryStore: ensureFreeSpace(10090) called with curMem=110248, maxMem=2050605711
15/11/09 23:30:56 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 9.9 KB, free 1955.5 MB)
15/11/09 23:30:56 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:59229 (size: 9.9 KB, free: 1955.6 MB)
15/11/09 23:30:56 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:15
15/11/09 23:30:57 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
	at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
	at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
	at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$32.apply(SparkContext.scala:978)
	at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$32.apply(SparkContext.scala:978)
	at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
	at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
	at scala.Option.map(Option.scala:145)
	at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
	at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
	at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:289)
	at org.test.spark.WordCount$.main(WordCount.scala:21)
	at org.test.spark.WordCount.main(WordCount.scala)
15/11/09 23:30:57 INFO FileInputFormat: Total input paths to process : 1
Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory file:/C:/Users/DV186010/scalaworkspace/spark/food.count.txt already exists
	at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1089)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1065)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1065)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
	at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1065)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:989)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:965)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:965)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
	at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:965)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:897)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:897)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:897)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
	at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:896)
	at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1400)
	at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1379)
	at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1379)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
	at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1379)
	at org.test.spark.WordCount$.main(WordCount.scala:22)
	at org.test.spark.WordCount.main(WordCount.scala)
15/11/09 23:30:57 INFO SparkContext: Invoking stop() from shutdown hook
15/11/09 23:30:57 INFO SparkUI: Stopped Spark web UI at http://192.168.28.1:4040
15/11/09 23:30:57 INFO DAGScheduler: Stopping DAGScheduler
15/11/09 23:30:57 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/11/09 23:30:57 INFO Utils: path = C:\Users\DV186010\AppData\Local\Temp\spark-123c0ccc-d677-4fae-b9fd-41b9b243905e\blockmgr-65d29cdd-d04f-48f4-85ba-3df96ee4aca7, already present as root for deletion.
15/11/09 23:30:57 INFO MemoryStore: MemoryStore cleared
15/11/09 23:30:57 INFO BlockManager: BlockManager stopped
15/11/09 23:30:57 INFO BlockManagerMaster: BlockManagerMaster stopped
15/11/09 23:30:57 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/11/09 23:30:57 INFO SparkContext: Successfully stopped SparkContext
15/11/09 23:30:57 INFO Utils: Shutdown hook called
15/11/09 23:30:57 INFO Utils: Deleting directory C:\Users\DV186010\AppData\Local\Temp\spark-123c0ccc-d677-4fae-b9fd-41b9b243905e
15/11/09 23:30:57 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
