To enable the Kryo serializer, you just need to pass
`spark.serializer=org.apache.spark.serializer.KryoSerializer`.
The `spark.kryo.registrationRequired` setting controls the following behavior:
> Whether to require registration with Kryo. If set to 'true', Kryo will
> throw an exception if an unregistered class is serialized.
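For example, a minimal sketch (the `MyEvent` class is a stand-in for whatever your job actually serializes):

```scala
import org.apache.spark.SparkConf

// Stand-in for your own serialized classes.
case class MyEvent(id: Long, payload: String)

// Enable Kryo and fail fast if anything unregistered gets serialized.
val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrationRequired", "true")
  .registerKryoClasses(Array(classOf[MyEvent]))
```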
You mean sharing a single Spark context across multiple jobs?
https://github.com/spark-jobserver/spark-jobserver does the same thing.
On Mon, Dec 5, 2016 at 9:33 AM, Mich Talebzadeh
wrote:
> Hi,
>
> Has there been any experience using Livy with Spark to share multiple
> Spark
Take a look at https://zeppelin.apache.org
On Tue, Nov 8, 2016 at 11:13 AM, Andrew Holway <
andrew.hol...@otternetworks.de> wrote:
> Hello,
>
> A colleague and I are trying to work out the best way to provide live data
> visualisations based on Spark. Is it possible to explore a dataset in spark
Take a look at https://github.com/spark-jobserver/spark-jobserver or
https://github.com/cloudera/livy
You can launch a persistent Spark context and then submit your jobs to an
already running context.
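With spark-jobserver, for instance, each job is a small object that receives the long-lived context. A rough sketch modeled on its documented word-count example (treat the exact trait and config key names as approximate):

```scala
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

// Every invocation reuses the SparkContext the server already holds,
// so there is no per-job context startup cost.
object WordCountJob extends SparkJob {
  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any =
    sc.parallelize(config.getString("input.string").split(" ").toSeq)
      .countByValue()
}
```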
On Wed, Nov 2, 2016 at 3:34 AM, Fanjin Zeng
wrote:
> Hi,
>
> I
Have you tried getting the number of threads in a running process with `cat
/proc/<pid>/status`?
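That check boils down to reading the `Threads:` line from the status file; a quick sketch (the pid here is a placeholder):

```scala
import scala.io.Source

// Placeholder pid; on Linux, /proc/<pid>/status contains a "Threads:" line.
val pid = 12345
val threadCount = Source.fromFile(s"/proc/$pid/status")
  .getLines()
  .collectFirst { case line if line.startsWith("Threads:") =>
    line.split("\\s+")(1).toInt
  }

println(s"Threads: ${threadCount.getOrElse(-1)}")
```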
On Sun, Oct 30, 2016 at 11:04 PM, kant kodali wrote:
> yes I did run ps -ef | grep "app_name" and it is root.
>
>
>
> On Sun, Oct 30, 2016 at 8:00 PM, Chan Chor Pang
You can use Cloudera Livy for that: https://github.com/cloudera/livy
Take a look at this example: https://github.com/cloudera/livy#spark-example
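The gist of that example, sketched against Livy's documented REST endpoints (the host, port, and session id here are placeholders):

```scala
import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets

// Minimal JSON-over-HTTP helper; a real client would check response codes.
def post(url: String, json: String): String = {
  val conn = new URL(url).openConnection().asInstanceOf[HttpURLConnection]
  conn.setRequestMethod("POST")
  conn.setRequestProperty("Content-Type", "application/json")
  conn.setDoOutput(true)
  conn.getOutputStream.write(json.getBytes(StandardCharsets.UTF_8))
  scala.io.Source.fromInputStream(conn.getInputStream).mkString
}

// 1. Create a persistent interactive session.
post("http://livy-host:8998/sessions", """{"kind": "spark"}""")
// 2. Submit code to the already running context (session id 0 assumed).
post("http://livy-host:8998/sessions/0/statements",
  """{"code": "sc.parallelize(1 to 10).sum()"}""")
```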
On Wed, Oct 26, 2016 at 4:35 AM, Mahender Sarangam <
mahender.bigd...@outlook.com> wrote:
> Hi,
>
> Is there any way to dynamically execute a string
Oh, and try to run even smaller executors, i.e. with
`spark.executor.memory` <= 16GiB. I wonder what result you're going to get.
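For instance, a sketch of what that sizing could look like (the numbers are illustrative, not a recommendation):

```scala
import org.apache.spark.SparkConf

// Scale out with smaller heaps instead of a few huge executors.
val conf = new SparkConf()
  .set("spark.executor.memory", "16g")   // keep each executor at or below 16 GiB
  .set("spark.executor.instances", "8")  // illustrative count; match your cluster
```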
On Sun, Oct 2, 2016 at 1:24 AM, Vadim Semenov <vadim.seme...@datadoghq.com>
wrote:
> > Do you mean running a multi-JVM 'cluster' on the single machine
The question has no connection to Spark.
In the future, if you use Apache mailing lists, use external services to add
screenshots, and make sure that your code is formatted so other members will
be able to read it.
On Fri, Sep 30, 2016 at 11:25 AM, chen yong wrote:
> Hello All,
>
>
The question should be addressed to the Oozie community.
As far as I remember, a Spark action doesn't support environment variables.
On Fri, Sep 30, 2016 at 8:11 PM, Saurabh Malviya (samalviy) <
samal...@cisco.com> wrote:
> Hi,
>
>
>
> I am running spark on yarn using oozie.
>
>
>
> When submit
> long[].
> Is it possible to force this specific operation to go off-heap so that it
> can possibly use a bigger page size?
>
>
>
> >Babak
>
>
> *Babak Alipour,*
> *University of Florida*
>
> On Fri, Sep 30, 2016 at 3:03 PM, Vadim Semenov <
> vadim.seme...@d
, will
> the job run in the Hadoop cluster?
> How stable is this API? We will need to implement it in a production env.
> Livy looks more promising but is still not mature.
> Have you tested any of them?
>
> Thanks,
> Abhishek
>
>
> On Fri, Sep 30, 2016 at 11:39
> ... at java.lang.Thread.run(Thread.java:745)
>
> I'm running spark in local mode so there is only one executor, the driver
> and spark.driver.memory is set to 64g. Changing the driver's memory doesn't
> help.
>
> *Babak Alipour,*
> *University of Florida*
>
> On Fri, Sep 30, 2016 at 2:05 P
There are two REST job servers that work with Spark:
https://github.com/spark-jobserver/spark-jobserver
https://github.com/cloudera/livy
On Fri, Sep 30, 2016 at 2:07 PM, ABHISHEK wrote:
> Hello all,
> Have you tried accessing a Spark application using RESTful web services?
>
Can you post the whole exception stack trace?
What are your executor memory settings?
Right now I assume that it happens in UnsafeExternalRowSorter ->
UnsafeExternalSorter.insertRecord.
Running more executors with a lower `spark.executor.memory` should help.
On Fri, Sep 30, 2016 at 12:57 PM,
Add "-Dspark.master=local[*]" to the VM properties of your test run.
On Mon, Sep 26, 2016 at 2:25 PM, Mohit Jaggi wrote:
> I want to use the following API: SparkILoop.run(...). I am writing a test
> case that passes some Scala code to the Spark interpreter and receives
>
I have experience with both Livy & spark-jobserver.
spark-jobserver gives you a better API, particularly if you want to work
within a single Spark context.
Livy supports submitting Python & R code, while spark-jobserver doesn't
support that.
spark-jobserver's code is more complex; it actively uses
Hi spark users,
I wonder if it's possible to change executor settings on-the-fly.
I have the following use case: I have a lot of non-splittable, skewed files
in a custom format that I read using a custom Hadoop RecordReader. These
files can be small or huge, and I'd like to use only one or two cores