Re: Where to set properties for the retainedJobs/Stages?

2016-04-04 Thread Max Schmidt
https://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties > w.r.t. spark-defaults.conf > On Fri, Apr 1, 2016 at 12:06 PM, Max Schmidt <m...@datapath.io> wrote: > Yes, but the doc doesn't say a word about which variable the configs
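
The "dynamically loading Spark properties" section referenced above covers passing settings at submit time instead of via spark-defaults.conf. A minimal sketch of that alternative; the class name, jar name and values are placeholders, not taken from the thread:

{{{
# Pass the UI retention settings per application at submit time.
# io.example.MySparkJob and my-spark-job.jar are hypothetical.
./bin/spark-submit \
  --conf spark.ui.retainedJobs=500 \
  --conf spark.ui.retainedStages=500 \
  --class io.example.MySparkJob \
  my-spark-job.jar
}}}

Note that spark.ui.* settings apply to the submitting application's driver, while spark.history.retainedApplications is read by the history server process, so it belongs in the configuration the history server is started with.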

Re: Where to set properties for the retainedJobs/Stages?

2016-04-01 Thread Max Schmidt
On 2016-04-01 18:58, Ted Yu wrote: You can set them in spark-defaults.conf. See also https://spark.apache.org/docs/latest/configuration.html#spark-ui [1] On Fri, Apr 1, 2016 at 8:26 AM, Max Schmidt <m...@datapath.io> wrote: Can somebody tell me the interaction between the prop
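
For reference, a minimal spark-defaults.conf sketch with the properties discussed in this thread; the values are illustrative, not taken from the mails:

{{{
# conf/spark-defaults.conf -- the spark.ui.* entries are read by each application's driver
spark.ui.retainedJobs                500
spark.ui.retainedStages              500
# Read by the history server process, not by individual applications
spark.history.retainedApplications   50
}}}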

Where to set properties for the retainedJobs/Stages?

2016-04-01 Thread Max Schmidt
Can somebody tell me the interaction between the properties: spark.ui.retainedJobs spark.ui.retainedStages spark.history.retainedApplications I know from the bugtracker that the last one describes the number of applications the history server holds in memory. Can I set the properties in the

Re: No active SparkContext

2016-03-31 Thread Max Schmidt
Just to mark this question closed - we experienced an OOM exception on the master, which we didn't see on the driver, but which made the master crash. On 24.03.2016 at 09:54, Max Schmidt wrote: > Hi there, > > we're using a ScheduledExecutor with the Java API (1.6.0) that > continuously execute
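
The thread doesn't say how the master OOM was addressed; the following is only a sketch of knobs that exist for the standalone master's memory footprint, assuming the OOM happened in the master daemon's JVM (values are illustrative):

{{{
# conf/spark-env.sh on the master host
# Heap size of the standalone Master/Worker daemons:
export SPARK_DAEMON_MEMORY=2g
# The master also retains state for finished applications and drivers; these caps
# can be lowered when many short-lived applications are submitted repeatedly:
export SPARK_MASTER_OPTS="-Dspark.deploy.retainedApplications=50 -Dspark.deploy.retainedDrivers=50"
}}}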

Re: No active SparkContext

2016-03-24 Thread Max Schmidt
Schmidt <m...@datapath.io> wrote: On 24.03.2016 at 10:34, Simon Hafner wrote: 2016-03-24 9:54 GMT+01:00 Max Schmidt <m...@datapath.io>: > we're using a ScheduledExecutor with the Java API (1.6.0) that continuously > executes a SparkJob against a standalone cluster. I'd recommend
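
Simon's recommendation is cut off in the snippet, so the following is only a sketch of the pattern the question describes: a ScheduledExecutorService periodically running work against a standalone cluster. Reusing a single long-lived JavaSparkContext instead of creating one per run is an assumption on my part, not something stated in the thread; the master URL and app name are placeholders.

{{{
import java.util.Arrays;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ScheduledSparkJob {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("spark://master-host:7077")  // placeholder standalone master URL
                .setAppName("scheduled-spark-job");
        // One context for the lifetime of the process; stopping and recreating it
        // per run is a common source of "no active SparkContext" errors when runs overlap.
        final JavaSparkContext jsc = new JavaSparkContext(conf);

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // Illustrative workload submitted on every tick.
            long count = jsc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
            System.out.println("processed " + count + " records");
        }, 0, 5, TimeUnit.MINUTES);
    }
}
}}}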

Re: apache spark errors

2016-03-24 Thread Max Schmidt
> 632275 [Executor task launch worker-12] ERROR org.apache.spark.executor.Executor - Managed memory leak detected; size = 5602240 bytes, TID = 47709 > 644989 [Executor task launch worker-13] ERROR org.apache.spark.executor.

Re: No active SparkContext

2016-03-24 Thread Max Schmidt
On 24.03.2016 at 10:34, Simon Hafner wrote: > 2016-03-24 9:54 GMT+01:00 Max Schmidt <m...@datapath.io>: > > we're using a ScheduledExecutor with the Java API (1.6.0) that continuously > > executes a SparkJob against a standalone cluster.

No active SparkContext

2016-03-24 Thread Max Schmidt
- finished. Any guess? -- Max Schmidt, Senior Java Developer | m...@datapath.io | Datapath.io GmbH

Logger overridden when using JavaSparkContext

2016-01-11 Thread Max Schmidt
Hi there, we're having a strange problem here using Spark in a Java application with the JavaSparkContext: we are using java.util.logging.* for logging in our application with 2 handlers (ConsoleHandler + FileHandler): {{{ .handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler

Re: Logger overridden when using JavaSparkContext

2016-01-11 Thread Max Schmidt
I checked the handlers of my rootLogger (java.util.logging.Logger.getLogger("")), which were a ConsoleHandler and a FileHandler. After the JavaSparkContext was created, the rootLogger only contained an 'org.slf4j.bridge.SLF4JBridgeHandler'. On 11.01.2016 at 10:56, Max Sch
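
A minimal sketch of the check described above; the before/after comments reflect what the mail reports, while the local master and app name are placeholders for illustration:

{{{
import java.util.logging.Handler;
import java.util.logging.Logger;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class RootHandlerCheck {
    private static void dumpRootHandlers(String when) {
        // Logger.getLogger("") returns the java.util.logging root logger.
        for (Handler h : Logger.getLogger("").getHandlers()) {
            System.out.println(when + ": " + h.getClass().getName());
        }
    }

    public static void main(String[] args) {
        dumpRootHandlers("before");   // e.g. ConsoleHandler, FileHandler
        JavaSparkContext jsc = new JavaSparkContext(
                new SparkConf().setMaster("local[*]").setAppName("handler-check"));
        dumpRootHandlers("after");    // per the mail: only SLF4JBridgeHandler remains
        jsc.stop();
    }
}
}}}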

Re: Logger overridden when using JavaSparkContext

2016-01-11 Thread Max Schmidt
Okay, I solved this problem... It was my own fault: I was configuring the root logger for java.util.logging. Using an explicit logger name for the handler/level settings solved it. On 2016-01-11 12:33, Max Schmidt wrote: I checked the handlers of my rootLogger (java.util.logging.Logger.getLogger("")) w
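
The mail doesn't show the final configuration; a sketch of what a named-logger setup in logging.properties could look like, with the package name io.datapath used purely as a placeholder:

{{{
# logging.properties -- attach handlers to a named logger instead of the root logger,
# so code that rewires the root handlers (e.g. an SLF4J bridge) does not remove them.
io.datapath.handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler
io.datapath.level = INFO
io.datapath.useParentHandlers = false

java.util.logging.ConsoleHandler.level = INFO
java.util.logging.FileHandler.pattern = %h/app-%u.log
}}}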