Hi All - I have a four-worker-node cluster, each node with 8 GB of memory. When I
submit a job, the driver takes 1 GB of memory, and each worker node allocates only
one executor, which also takes just 1 GB. The job's configuration has:
sparkConf
.setExecutorEnv("spark.driver.memory", "6g")
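(For what it's worth, a sketch of how this is usually configured, assuming the standard SparkConf API: spark.driver.memory and spark.executor.memory are Spark properties, so they go through .set(...), not .setExecutorEnv(...), which only sets environment variables on the executors.)

```scala
import org.apache.spark.SparkConf

// Configuration sketch -- app name is hypothetical.
val conf = new SparkConf()
  .setAppName("my-app")
  .set("spark.executor.memory", "6g") // per-executor JVM heap
  .set("spark.driver.memory", "6g")   // only honored if set before the driver
                                      // JVM starts; with an already-running
                                      // driver use spark-submit --driver-memory 6g
```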
I see. Thanks!
From: Steve Loughran <ste...@hortonworks.com>
Date: Friday, October 30, 2015 at 12:03 PM
To: William Li <a-...@expedia.com>
Cc: "Zhang, Jingyu" <jingyu.zh...@news.com.au>
Thanks for your response. My secret has a slash (/) in it, so it didn't work...
From: "Zhang, Jingyu" <jingyu.zh...@news.com.au>
Date: Thursday, October 29, 2015 at 5:16 PM
To: William Li <a-...@expedia.com>
Hi - I have a simple app that runs fine with Spark; it reads data from S3 and
performs calculations.
When reading data from S3, I use hadoopConfiguration.set for both
fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey, so it has permission to
load the data from customer sources.
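A minimal sketch of that setup, assuming an existing SparkContext named sc (bucket path and environment-variable names are hypothetical):

```scala
// Setting the keys on hadoopConfiguration keeps them out of the URL itself,
// which matters because a secret containing a slash (/) breaks
// s3n://KEY:SECRET@bucket-style paths.
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", sys.env("AWS_ACCESS_KEY_ID"))
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", sys.env("AWS_SECRET_ACCESS_KEY"))

val customerData = sc.textFile("s3n://customer-bucket/input/") // hypothetical path
```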
However,
Date: Thursday, October 22, 2015 at 10:36 AM
To: William Li <a-...@expedia.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: Mav
." Not sure of the
>equivalent in IntelliJ, but it will be updating the same repo IJ sees.
>Try that. The repo definitely has 1.5.1 as you can see.
>
>On Thu, Oct 22, 2015 at 11:44 AM, William Li <a-...@expedia.com> wrote:
>> Thanks Deenar for your response. I am able to
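The command-line refresh suggested above could look something like this (the artifact coordinate is assumed from the thread; this goal downloads into the same local ~/.m2 repository IntelliJ reads):

```shell
# -U forces Maven to re-check remote repositories instead of trusting
# a stale local index.
mvn -U dependency:get -Dartifact=org.apache.spark:spark-sql_2.10:1.5.1
```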
Hi - I tried to download Spark SQL for Scala 2.10, version 1.5.1, from IntelliJ
using the Maven library:
-Project Structure
-Global Libraries, click on the + to select Maven Repository
-Type in org.apache.spark to see the list.
-The result list only shows versions up to spark-sql_2.10-1.1.1
-I tried
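For reference, the coordinate being searched for corresponds to this Maven dependency (the suffix on the artifact id encodes the Scala version, 2.10):

```xml
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.10</artifactId>
  <version>1.5.1</version>
</dependency>
```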