I sort of agree, but the problem is that some of this should be set in code.

Some of our ES indexes have 100-200 columns.

Defining which ones are arrays on the command line is going to get ugly
fast.
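
For illustration, here's roughly what I'd rather end up with -- the array
fields built in code instead of enumerated on the command line. This is only
a sketch: the field list and ES endpoint are made up, and I'm assuming
es.read.field.as.array.include is the right elasticsearch-hadoop key for this.

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical: the fields in our mapping that are arrays. In practice
    // we'd generate this from the ES mapping rather than hand-maintain
    // 100-200 entries.
    val arrayFields = Seq("tags", "categories", "source_links")

    val conf = new SparkConf()
      .setAppName("EsReader")             // illustrative app name
      .set("es.nodes", "localhost:9200")  // assumed ES endpoint
      .set("es.read.field.as.array.include", arrayFields.mkString(","))

    val sc = new SparkContext(conf)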



On Tue, Sep 13, 2016 at 11:50 AM, Sean Owen <so...@cloudera.com> wrote:

> You would generally use --conf to set this on the command line if using
> the shell.
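>
> For example (a sketch only -- the es.* values are placeholders, and this
> relies on elasticsearch-hadoop resolving its keys when they're passed with
> a "spark." prefix, since --conf ignores keys that don't start with
> "spark."):
>
>   spark-shell \
>     --conf spark.es.nodes=localhost:9200 \
>     --conf spark.es.read.field.as.array.include=tags,categories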
>
>
> On Tue, Sep 13, 2016, 19:22 Kevin Burton <bur...@spinn3r.com> wrote:
>
>> The problem is that unless I create a new SparkContext with a custom conf,
>> elasticsearch-hadoop refuses to read in the settings for the ES setup...
>>
>> If I do an sc.stop() and then create a new one, it seems to work fine.
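>>
>> i.e. roughly this (a sketch of what I'm doing -- the es.* settings are
>> placeholders for our real ones):
>>
>>   import org.apache.spark.{SparkConf, SparkContext}
>>
>>   sc.stop()  // stop the shell's pre-created context first
>>
>>   val conf = new SparkConf()
>>     .setMaster("local[2]")              // illustrative master
>>     .setAppName("EsReader")
>>     .set("es.nodes", "localhost:9200")  // hypothetical ES endpoint
>>
>>   val sc2 = new SparkContext(conf)      // works once the old one is stopped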
>>
>> But that isn't really documented anywhere, and all the existing
>> documentation is now invalid, because you get an exception when you try to
>> create a new SparkContext.
>>
>> On Tue, Sep 13, 2016 at 11:13 AM, Mich Talebzadeh <
>> mich.talebza...@gmail.com> wrote:
>>
>>> I think this works in a shell, but you need to allow multiple Spark
>>> contexts:
>>>
>>> Spark context Web UI available at http://50.140.197.217:55555
>>> Spark context available as 'sc' (master = local, app id = local-1473789661846).
>>> Spark session available as 'spark'.
>>> Welcome to
>>>       ____              __
>>>      / __/__  ___ _____/ /__
>>>     _\ \/ _ \/ _ `/ __/  '_/
>>>    /___/ .__/\_,_/_/ /_/\_\   version 2.0.0
>>>       /_/
>>> Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
>>> Type in expressions to have them evaluated.
>>> Type :help for more information.
>>>
>>> scala> import org.apache.spark.{SparkConf, SparkContext}
>>> import org.apache.spark.{SparkConf, SparkContext}
>>>
>>> scala> val conf = new SparkConf().setMaster("local[2]").setAppName("CountingSheep").set("spark.driver.allowMultipleContexts", "true")
>>> conf: org.apache.spark.SparkConf = org.apache.spark.SparkConf@bb5f9d
>>>
>>> scala> val sc = new SparkContext(conf)
>>> sc: org.apache.spark.SparkContext = org.apache.spark.SparkContext@4888425d
>>>
>>>
>>> HTH
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>>
>>> On 13 September 2016 at 18:57, Sean Owen <so...@cloudera.com> wrote:
>>>
>>>> But you're in the shell there, which already has a SparkContext for you
>>>> as sc.
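>>>>
>>>> i.e. (illustrative output -- the object id will differ):
>>>>
>>>>   scala> sc
>>>>   res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@1a2b3c4d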
>>>>
>>>> On Tue, Sep 13, 2016 at 6:49 PM, Kevin Burton <bur...@spinn3r.com>
>>>> wrote:
>>>>
>>>>> I'm rather confused here as to what to do about creating a new
>>>>> SparkContext.
>>>>>
>>>>> Spark 2.0 prevents it... (exception included below)
>>>>>
>>>>> Yet a TON of examples I've seen basically tell you to create a new
>>>>> SparkContext as standard practice:
>>>>>
>>>>> http://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties
>>>>>
>>>>> val conf = new SparkConf()
>>>>>              .setMaster("local[2]")
>>>>>              .setAppName("CountingSheep")
>>>>> val sc = new SparkContext(conf)
>>>>>
>>>>>
>>>>> I'm specifically running into a problem where elasticsearch-hadoop won't
>>>>> pick up its settings, and I think it's related to this problem.
>>>>>
>>>>> Do we have to call sc.stop() first and THEN create a new SparkContext?
>>>>>
>>>>> That works, but I can't find any documentation anywhere telling us the
>>>>> right course of action.
>>>>>
>>>>>
>>>>>
>>>>> scala> val sc = new SparkContext();
>>>>> org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
>>>>> org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
>>>>> org.apache.spark.repl.Main$.createSparkSession(Main.scala:95)
>>>>> <init>(<console>:15)
>>>>> <init>(<console>:31)
>>>>> <init>(<console>:33)
>>>>> .<init>(<console>:37)
>>>>> .<clinit>(<console>)
>>>>> .$print$lzycompute(<console>:7)
>>>>> .$print(<console>:6)
>>>>> $print(<console>)
>>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>> java.lang.reflect.Method.invoke(Method.java:497)
>>>>> scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
>>>>> scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
>>>>> scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
>>>>> scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
>>>>> scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
>>>>> scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
>>>>>   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2221)
>>>>>   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2217)
>>>>>   at scala.Option.foreach(Option.scala:257)
>>>>>   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:2217)
>>>>>   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:2290)
>>>>>   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
>>>>>   at org.apache.spark.SparkContext.<init>(SparkContext.scala:121)
>>>>>   ... 48 elided
>>>>>
>>>>>
>>>>
>>>
>>
>>


-- 

We’re hiring if you know of any awesome Java Devops or Linux Operations
Engineers!

Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
<https://plus.google.com/102718274791889610666/posts>
