Hi all,
Please see below for a list of upcoming technical sessions
on BigDL and Analytics Zoo (
https://github.com/intel-analytics/analytics-zoo/) this week:
- Engineers from Intel will deliver a 3-hour tutorial, "Analytics Zoo:
Distributed TensorFlow and Keras on Apache Spark".
There's also a driver UI, usually available on port 4040. After running
your code (I assume you are running it on your own machine), visit
localhost:4040 and you will see the driver UI.
If you think the driver is running on one of your master/executor nodes,
log in to those machines and run netstat to see which port the UI is
listening on.
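If it helps, the UI address can also be printed from the application itself.
A minimal sketch, assuming Spark 2.x and that you create (or reuse) a
SparkSession named spark:

import org.apache.spark.sql.SparkSession

// Reuse the current SparkSession if one already exists (e.g. in spark-shell).
val spark = SparkSession.builder().getOrCreate()

// uiWebUrl holds the address of the driver UI, when the UI is enabled.
spark.sparkContext.uiWebUrl.foreach(url => println(s"Driver UI at: $url"))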
Hello,
Is spark.driver.memory per job, or shared across jobs? Should you do load
testing before setting it?
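For reference, a minimal sketch of how to check the effective value from
inside a running application, assuming a SparkSession named spark; the
setting itself is normally supplied per application at launch time (e.g.
--driver-memory or spark-defaults.conf):

// Prints the driver memory configured for this application;
// Spark falls back to 1g when the property is not set.
println(spark.sparkContext.getConf.get("spark.driver.memory", "1g (default)"))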
Thanks & regards
Arko
On Sun, Mar 24, 2019 at 3:09 PM Pat Ferrel wrote:
>
> 2 Slaves, one of which is also Master.
>
> Node 1 & 2 are slaves. Node 1 is where I run start-all.sh.
>
>
2 Slaves, one of which is also Master.
Node 1 & 2 are slaves. Node 1 is where I run start-all.sh.
The machines both have 60g of free memory (leaving about 4g for the master
process on Node 1). The only constraint to the Driver and Executors is
spark.driver.memory = spark.executor.memory = 60g
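(For context, a minimal sketch of how such a constraint is typically
expressed, assuming Spark standalone mode and a hypothetical application
name; spark.driver.memory generally has to be set at launch time, e.g. via
--driver-memory or spark-defaults.conf, because the driver JVM is already
running when application code executes:)

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("memory-sizing-example")        // hypothetical app name
  .config("spark.executor.memory", "60g")  // matches the constraint above
  .getOrCreate()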
I wrote a Spark data source v2 in Spark 2.3 and I want to support
writeStream. What should I do in order to do so?
My DefaultSource class:
class MyDefaultSource extends DataSourceV2 with ReadSupport with
WriteSupport with MicroBatchReadSupport { ..
Which interface is missing?
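Not an authoritative answer, but in the Spark 2.3 DataSourceV2 API the
streaming write path goes through StreamWriteSupport, so a sketch of the
missing piece might look like this (MyStreamWriter is a hypothetical class
you would implement yourself; the existing read/write methods are elided):

import org.apache.spark.sql.sources.v2._
import org.apache.spark.sql.sources.v2.writer.streaming.StreamWriter
import org.apache.spark.sql.streaming.OutputMode
import org.apache.spark.sql.types.StructType

class MyDefaultSource extends DataSourceV2
    with ReadSupport with WriteSupport
    with MicroBatchReadSupport with StreamWriteSupport {

  // ... existing createReader / createWriter / createMicroBatchReader ...

  override def createStreamWriter(
      queryId: String,
      schema: StructType,
      mode: OutputMode,
      options: DataSourceOptions): StreamWriter = {
    // MyStreamWriter (hypothetical) handles epoch commit/abort and produces
    // the DataWriterFactory instances used on the executors.
    new MyStreamWriter(schema, mode, options)
  }
}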
Hi Pat,
On Sun, Mar 24, 2019 at 1:03 PM Pat Ferrel wrote:
> Thanks, I have seen this many times in my research. Paraphrasing docs: “in
> deployMode ‘cluster’ the Driver runs on a Worker in the cluster”
>
> When I look at logs I see 2 executors on the 2 slaves (executor 0 and 1
> with addresses
Thanks, I have seen this many times in my research. Paraphrasing docs: “in
deployMode ‘cluster’ the Driver runs on a Worker in the cluster”
When I look at logs I see 2 executors on the 2 slaves (executor 0 and 1
with addresses that match slaves). When I look at memory usage while the
job runs I