Great, thanks.
On Sun, May 29, 2016 at 12:38 AM, Chris Fregly wrote:
btw, here's a handy Spark Config Generator by Ewan Higgs in Gent, Belgium:
code: https://github.com/ehiggs/spark-config-gen
demo: http://ehiggs.github.io/spark-config-gen/
my recent tweet on this:
https://twitter.com/cfregly/status/736631633927753729
On Sat, May 28, 2016 at 10:50 AM, Mich Talebzadeh wrote:
hang on. Free is telling me you have 8GB of memory. I was under the
impression that you had 4GB of RAM :)
So with no app running you have 3.99GB free, i.e. ~4GB.
The 1st app takes 428MB of memory and the second 425MB, so these are pretty
lean apps. Mind you, the apps that I run take 2-3GB each, but your mileage
may vary.
i ran these from multiple bash shells for now; probably a multi-threaded
python script would do, with memory and resource allocations passed as
spark-submit parameters, as sketched below.
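For illustration, a minimal launcher along those lines (the app jar names and
allocations are hypothetical; spark-submit is assumed to be on the PATH):

import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical apps and per-app allocations; adjust to your own jars/values.
APPS = [
    ("app1.jar", "2G", "2G"),
    ("app2.jar", "2G", "2G"),
]

def submit(jar, driver_mem, executor_mem):
    # Memory and resource allocations are passed as spark-submit parameters.
    cmd = ["spark-submit",
           "--master", "local[2]",
           "--driver-memory", driver_mem,
           "--executor-memory", executor_mem,
           jar]
    return subprocess.run(cmd).returncode

# One thread per app, so the drivers run concurrently, as with multiple shells.
with ThreadPoolExecutor(max_workers=len(APPS)) as pool:
    exit_codes = list(pool.map(lambda app: submit(*app), APPS))
print(exit_codes)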
say, before running any applications:
[root@fos-elastic02 ~]# /usr/bin/free
total used free shared
OK, that is good news. So briefly, how do you kick off spark-submit (or
SparkConf) for each app, in terms of memory/resource allocations?
Now what is the output of
/usr/bin/free
Dr Mich Talebzadeh
LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Yes Mich,
They are currently emitting the results in parallel, at http://localhost:4040
and http://localhost:4041; i can also see the monitoring from these URLs.
On Sat, May 28, 2016 at 10:37 PM, Mich Talebzadeh wrote:
ok, they are submitted, but is the latter one (14302) doing anything?
can you check it with jmonitor or the logs created?
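A quick alternative check: each driver's UI also serves a small REST API, so
something like the sketch below can confirm both apps are registered (the
ports are assumed from the URLs mentioned above):

import json
from urllib.request import urlopen

# The first driver's UI binds to 4040, the second to 4041.
for port in (4040, 4041):
    url = "http://localhost:%d/api/v1/applications" % port
    with urlopen(url) as resp:
        for app in json.load(resp):
            print(port, app["id"], app["name"])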
HTH
Dr Mich Talebzadeh
LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Thanks Ted,
Thanks Mich, yes i see that i can run two applications by submitting these;
presumably Driver + Executor run in a single JVM, i.e. in-process Spark.
wondering if this can be used in production systems; the reason for me
considering local instead of standalone cluster mode is purely the limited
resources available.
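For reference, a minimal sketch of what that in-process (local mode) setup
looks like when built programmatically with SparkConf (the app name here is
illustrative):

from pyspark import SparkConf, SparkContext

# With a local[*] master, driver and executor share one JVM: in-process Spark.
conf = SparkConf().setAppName("in-process-app").setMaster("local[2]")
sc = SparkContext(conf=conf)
print(sc.parallelize(range(1000)).sum())  # computed entirely in this process
sc.stop()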
Ok, so you want to run all this in local mode. In other words, something like
below:
${SPARK_HOME}/bin/spark-submit \
--master local[2] \
--driver-memory 2G \
--num-executors 1 \
--executor-memory 2G \
<path-to-your-app-jar>
(note that --num-executors only applies on YARN; in local mode the driver and
executor share a single JVM, so --driver-memory is the setting that matters)
Sujeet:
Please also see:
https://spark.apache.org/docs/latest/spark-standalone.html
On Sat, May 28, 2016 at 9:19 AM, Mich Talebzadeh wrote:
Hi Sujeet,
if you have a single machine then it is Spark standalone mode.
In standalone cluster mode Spark allocates resources based on cores. By
default, an application will grab all the cores in the cluster.
You only have one worker that lives within the driver JVM process that you
start when you launch the application with spark-shell or spark-submit.
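Since an app grabs all cores by default, one way to let several apps coexist
on a standalone cluster is to cap each one's share; spark.cores.max is the
documented property for this, though the master URL and values in this sketch
are illustrative:

from pyspark import SparkConf, SparkContext

# Cap this app's share of the cluster so other drivers can get cores too.
conf = (SparkConf()
        .setAppName("capped-app")
        .setMaster("spark://localhost:7077")   # assumed standalone master URL
        .set("spark.cores.max", "1")           # don't grab every core
        .set("spark.executor.memory", "1g"))
sc = SparkContext(conf=conf)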
Hi,
I have a question w.r.t. the production deployment mode of Spark.
I have 3 applications which i would like to run independently on a single
machine; i need to run the drivers on the same machine.
The amount of resources i have is also limited: 4-5GB RAM, 3-4 cores.
For deployment in production, should i use local mode or standalone cluster
mode?