Ah ok. Good catch ;)

Regards
JB

On 11/02/2015 11:51 AM, Balachandar R.A. wrote:
It seems I made a silly mistake: my launch command was missing the --master option in
front of the Spark master URL. After supplying --master with the spark:// URL, the error is gone.
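
For reference, the corrected launch command would look roughly like the one below (a sketch
only, based on the command quoted further down in this thread; without --master, the
spark:// URL seems to be treated as an application/file path, which matches the
"No FileSystem for scheme: spark" error):

    spark-submit --master spark://<myHost>:7077 --class org.myjob --jars myjob.jar myjob.jar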

Thanks for pointing out possible places for troubleshooting

Regards
Bala

On 02-Nov-2015 3:15 pm, "Balachandar R.A." <balachandar...@gmail.com> wrote:

    No, I am not using YARN. YARN is not running in my cluster, so it is a standalone one.

    Regards
    Bala

    On 02-Nov-2015 3:11 pm, "Jean-Baptiste Onofré" <j...@nanthrax.net> wrote:

        Just to be sure: you are using a YARN cluster (not standalone), right?

        Regards
        JB

        On 11/02/2015 10:37 AM, Balachandar R.A. wrote:

            Yes. I use spark:// in two different places:

            1. In my code, while creating the Spark configuration, I use the code below:

            val sConf = new SparkConf().setAppName("Dummy").setMaster("spark://<myHost>:7077")
            val sc = new SparkContext(sConf)
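
            As a side note, when the master is supplied via --master or via spark.master in
            spark-defaults.conf (as it is elsewhere in this thread), a minimal sketch of the
            same setup that does not hard-code the URL in the code would be:

                import org.apache.spark.{SparkConf, SparkContext}

                // Master URL comes from --master / spark-defaults.conf rather than the code.
                val sConf = new SparkConf().setAppName("Dummy")
                val sc = new SparkContext(sConf)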


            2. I run the job using the command below

            spark-submit --class org.myjob --jars myjob.jar spark://<myHost>:7077 myjob.jar

            regards
            Bala


            On 2 November 2015 at 14:59, Romi Kuntsman <r...@totango.com> wrote:

                 except "spark.master", do you have "spark://" anywhere
            in your code
                 or config files?

                 *Romi Kuntsman*, /Big Data Engineer/
                 http://www.totango.com

                 On Mon, Nov 2, 2015 at 11:27 AM, Balachandar R.A.
                 <balachandar...@gmail.com> wrote:


                     ---------- Forwarded message ----------
                     From: "Balachandar R.A." <balachandar...@gmail.com>
                     Date: 02-Nov-2015 12:53 pm
                     Subject: Re: Error : - No filesystem for scheme: spark
                     To: "Jean-Baptiste Onofré" <j...@nanthrax.net>
                     Cc:

                      > Hi JB,
                      > Thanks for the response. Here is the content of my spark-defaults.conf:
                      >
                      > # Default system properties included when running spark-submit.
                      > # This is useful for setting default environmental settings.
                      >
                      > # Example:
                      >  spark.master                     spark://fdoat:7077
                      > # spark.eventLog.enabled           true
                      >  spark.eventLog.dir               /home/bala/spark-logs
                      > # spark.eventLog.dir               hdfs://namenode:8021/directory
                      > # spark.serializer                 org.apache.spark.serializer.KryoSerializer
                      > # spark.driver.memory              5g
                      > # spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
                      >
                      > Regards
                      > Bala
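
                      With spark.master already set in spark-defaults.conf as above, the launch
                      command would not strictly need the master URL on the command line at all;
                      a rough sketch (same class and jar names as used elsewhere in this thread):

                          spark-submit --class org.myjob --jars myjob.jar myjob.jar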


                      >
                      > On 2 November 2015 at 12:21, Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
                      >>
                      >> Hi,
                      >>
                      >> Do you have anything special in conf/spark-defaults.conf (especially the eventLog directory)?
                      >>
                      >> Regards
                      >> JB
                      >>
                      >>
                      >> On 11/02/2015 07:48 AM, Balachandar R.A. wrote:
                      >>>
                      >>> Can someone tell me at what point this error could occur?
                      >>>
                      >>> In one of my use cases, I am trying to use a custom Hadoop input format.
                      >>> Here is my code:
                      >>>
                      >>> val hConf: Configuration = sc.hadoopConfiguration
                      >>> hConf.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
                      >>> hConf.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
                      >>>
                      >>> var job = new Job(hConf)
                      >>> FileInputFormat.setInputPaths(job, new Path("hdfs:///user/bala/MyBinaryFile"))
                      >>>
                      >>> var hRDD = new NewHadoopRDD(sc, classOf[RandomAccessInputFormat],
                      >>>   classOf[IntWritable], classOf[BytesWritable], job.getConfiguration())
                      >>>
                      >>> val count = hRDD.mapPartitionsWithInputSplit { (split, iter) => myfuncPart(split, iter) }
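
                      myfuncPart above is the poster's own function and is not shown in the thread;
                      a purely hypothetical signature that would type-check with
                      mapPartitionsWithInputSplit on this RDD is roughly:

                          // Hypothetical placeholder, not the poster's actual implementation.
                          def myfuncPart(split: org.apache.hadoop.mapreduce.InputSplit,
                                         iter: Iterator[(IntWritable, BytesWritable)]): Iterator[String] =
                            iter.map { case (k, v) => s"key=$k bytes=${v.getLength}" }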
                      >>>
                      >>> The moment I invoke the mapPartitionsWithInputSplit() method, I get the
                      >>> error below in my spark-submit launch:
                      >>>
                      >>> 15/10/30 11:11:39 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0
                      >>> (TID 0, 40.221.94.235): java.io.IOException: No FileSystem for scheme: spark
                      >>>     at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
                      >>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
                      >>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
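
                      For context, Hadoop maps a URI scheme to a FileSystem class when a path is
                      opened, so the exception above appears whenever a spark:// URL ends up where
                      a file path is expected. A minimal Scala sketch of that lookup (assuming
                      hadoop-common and hadoop-hdfs are on the classpath; not from the original thread):

                          import org.apache.hadoop.conf.Configuration
                          import org.apache.hadoop.fs.FileSystem

                          val conf = new Configuration()
                          // "hdfs" resolves to a real FileSystem implementation...
                          println(FileSystem.getFileSystemClass("hdfs", conf))
                          // ...but "spark" has none, so this line throws
                          // java.io.IOException: No FileSystem for scheme: spark
                          println(FileSystem.getFileSystemClass("spark", conf))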
                      >>>
                      >>> Any help here to move towards fixing this would be greatly appreciated.
                      >>>
                      >>>
                      >>>
                      >>> Thanks
                      >>>
                      >>> Bala
                      >>>
                      >>
                      >> --
                      >> Jean-Baptiste Onofré
                      >> jbono...@apache.org
                      >> http://blog.nanthrax.net
                      >> Talend - http://www.talend.com
                      >>
                      >>

            
                      >




        --
        Jean-Baptiste Onofré
        jbono...@apache.org
        http://blog.nanthrax.net
        Talend - http://www.talend.com



--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
