---------- Forwarded message ----------
From: "Balachandar R.A." <balachandar...@gmail.com>
Date: 02-Nov-2015 12:53 pm
Subject: Re: Error : - No filesystem for scheme: spark
To: "Jean-Baptiste Onofré" <j...@nanthrax.net>
Cc:

> Hi JB,
> Thanks for the response,
> Here is the content of my spark-defaults.conf
>
>
> # Default system properties included when running spark-submit.
> # This is useful for setting default environmental settings.
>
> # Example:
>  spark.master                     spark://fdoat:7077
> # spark.eventLog.enabled           true
>  spark.eventLog.dir               /home/bala/spark-logs
> # spark.eventLog.dir               hdfs://namenode:8021/directory
> # spark.serializer                 org.apache.spark.serializer.KryoSerializer
> # spark.driver.memory              5g
> # spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
>
>
> regards
> Bala
>
> On 2 November 2015 at 12:21, Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
>>
>> Hi,
>>
>> Do you have something special in conf/spark-defaults.conf (especially on
>> the eventLog directory)?
>>
>> Regards
>> JB
>>
>>
>> On 11/02/2015 07:48 AM, Balachandar R.A. wrote:
>>>
>>> Can someone tell me under what circumstances this error can occur?
>>>
>>> In one of my use cases, I am trying to use a custom Hadoop input format.
>>> Here is my code:
>>>
>>> val hConf: Configuration = sc.hadoopConfiguration
>>> hConf.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
>>> hConf.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
>>>
>>> var job = new Job(hConf)
>>> FileInputFormat.setInputPaths(job, new Path("hdfs:///user/bala/MyBinaryFile"))
>>>
>>> var hRDD = new NewHadoopRDD(sc, classOf[RandomAccessInputFormat],
>>>   classOf[IntWritable], classOf[BytesWritable], job.getConfiguration())
>>>
>>> val count = hRDD.mapPartitionsWithInputSplit { (split, iter) => myfuncPart(split, iter) }
>>>
>>> The moment I invoke the mapPartitionsWithInputSplit() method, I get the
>>> error below in my spark-submit launch:
>>>
>>> 15/10/30 11:11:39 WARN scheduler.TaskSetManager: Lost task 0.0 in stage
>>> 0.0 (TID 0, 40.221.94.235): java.io.IOException: No FileSystem for scheme: spark
>>>         at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
>>>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
>>>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
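>>>
>>> If it helps to narrow things down: as far as I understand, Hadoop throws
>>> exactly this exception whenever a URI is handed to FileSystem.get() and no
>>> fs.<scheme>.impl class is registered for that scheme. A spark-shell style
>>> sketch (untested, the spark:// URI is only an example) that should
>>> reproduce the same message:
>>>
>>> import java.net.URI
>>> import org.apache.hadoop.conf.Configuration
>>> import org.apache.hadoop.fs.FileSystem
>>>
>>> // Nothing registers an fs.spark.impl class, so resolving the scheme fails
>>> // with: java.io.IOException: No FileSystem for scheme: spark
>>> FileSystem.get(new URI("spark://fdoat:7077/some/path"), new Configuration())
>>>
>>> So it looks as if, somewhere downstream, a spark:// URL is being treated as
>>> a filesystem path.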
>>>
>>> Any help in moving towards a fix would be much appreciated.
>>>
>>>
>>>
>>> Thanks
>>>
>>> Bala
>>>
>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>
