Hi David, but removing the setMaster line causes this error:

org.apache.spark.SparkException: A master URL must be set in your
configuration
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:402)
    at example.spark.AmazonKafkaConnector$.main(AmazonKafkaConnectorWithMongo.scala:93)
    at example.spark.AmazonKafkaConnector.main(AmazonKafkaConnectorWithMongo.scala)
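
For context, a simplified sketch of the driver setup (the object and file names
come from the stack trace above; the rest is illustrative, not the actual code):

    import org.apache.spark.{SparkConf, SparkContext}

    object AmazonKafkaConnector {
      def main(args: Array[String]): Unit = {
        // Without setMaster, the master URL must come from spark-submit
        // (--master yarn) or a spark.master property; running the driver
        // directly (e.g. from an IDE) without either fails with
        // "A master URL must be set in your configuration".
        val sparkConf = new SparkConf().setAppName("AmazonKafkaConnector")
        val sc = new SparkContext(sparkConf)
        // ... Kafka/Mongo job logic would go here ...
        sc.stop()
      }
    }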




Alonso Isidoro Roman
about.me/alonso.isidoro.roman
<https://about.me/alonso.isidoro.roman?promo=email_sig&utm_source=email_sig&utm_medium=email_sig&utm_campaign=external_links>

2016-06-03 18:23 GMT+02:00 David Newberger <david.newber...@wandcorp.com>:

> Alonso, I could totally be misunderstanding something or missing a piece
> of the puzzle; however, remove .setMaster. If you do that, it will run with
> whatever the CDH VM is set up for, which in the out-of-the-box default case
> is YARN with the client deploy mode.
>
> val sparkConf = new SparkConf().setAppName("Some App thingy thing")
>
>
>
> From the Spark 1.6.0 Scala API Documentation:
>
>
> https://spark.apache.org/docs/1.6.0/api/scala/index.html#org.apache.spark.SparkConf
>
>
>
>
> “
> Configuration for a Spark application. Used to set various Spark
> parameters as key-value pairs.
>
> Most of the time, you would create a SparkConf object with new SparkConf(),
> which will load values from any spark.* Java system properties set in
> your application as well. In this case, parameters you set directly on the
>  SparkConf object take priority over system properties.
>
> For unit tests, you can also call new SparkConf(false) to skip loading
> external settings and get the same configuration no matter what the system
> properties are.
>
> All setter methods in this class support chaining. For example, you can
> write new SparkConf().setMaster("local").setAppName("My app").
>
> Note that once a SparkConf object is passed to Spark, it is cloned and can
> no longer be modified by the user. Spark does not support modifying the
> configuration at runtime.
>
> ”
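>
> A minimal sketch illustrating the behaviour described above (the names and
> values here are placeholders, not from your job):
>
> // Normal case: loads any spark.* system properties; explicit setters win.
> val conf = new SparkConf()
>   .setAppName("My app")
>   .set("spark.executor.memory", "1g")
>
> // Unit tests: new SparkConf(false) skips loading external settings.
> val testConf = new SparkConf(false).setMaster("local").setAppName("test")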
>
>
>
> *David Newberger*
>
>
>
> *From:* Alonso Isidoro Roman [mailto:alons...@gmail.com]
> *Sent:* Friday, June 3, 2016 10:37 AM
> *To:* David Newberger
> *Cc:* user@spark.apache.org
> *Subject:* Re: About a problem running a spark job in a cdh-5.7.0 vmware
> image.
>
>
>
> Thank you David, so I would have to change the way I am creating the
> SparkConf object, wouldn't I?
>
>
>
> I can see in this link
> <http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_running_spark_on_yarn.html#concept_ysw_lnp_h5>
> that the way to run a Spark job on YARN is with this kind of command:
>
>
>
> spark-submit --class org.apache.spark.examples.SparkPi --master yarn \
>   --deploy-mode client SPARK_HOME/lib/spark-examples.jar 10
>
> Can I do this programmatically, maybe by changing setMaster to something
> like setMaster("yarn:quickstart.cloudera:8032")?
>
> I have seen the port in this guide:
> http://www.cloudera.com/documentation/enterprise/5-6-x/topics/cdh_ig_ports_cdh5.html
>