Oh yeah, they picked up the changes after the restart, thanks!

On Thu, Feb 5, 2015 at 8:13 PM, Charles Feduke <charles.fed...@gmail.com>
wrote:

> I don't see anything in the docs that says you must explicitly restart the
> workers to load new settings, but most daemons either trap a signal or
> require a brute-force full restart to reload their configuration. I'd take a
> guess and run the $SPARK_HOME/sbin/{stop,start}-slaves.sh scripts on your
> master node and see. (
> http://spark.apache.org/docs/1.2.0/spark-standalone.html#cluster-launch-scripts
> )
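>
> Roughly, from the master node, that would be something like the following
> (the /root/spark path is my guess based on the default spark-ec2 layout;
> substitute your actual $SPARK_HOME if it differs):
>
>     /root/spark/sbin/stop-slaves.sh    # stop the Worker daemon on every host listed in conf/slaves
>     /root/spark/sbin/start-slaves.sh   # start the Workers again so they re-read conf/spark-env.sh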
>
> I just tested this on my integration EC2 cluster and got an odd result from
> the stop script (it reported that no workers were found), but the start
> script seemed to work. The cluster was still running and functioning after
> executing both scripts, though I hadn't made any changes to spark-env
> either.
>
> On Thu Feb 05 2015 at 9:49:49 PM Kane Kim <kane.ist...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm trying to change a setting as described here:
>> http://spark.apache.org/docs/1.2.0/ec2-scripts.html
>> export SPARK_WORKER_CORES=6
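>>
>> (Concretely, I appended that line to spark-env.sh on the master, something
>> like:
>>
>>     echo 'export SPARK_WORKER_CORES=6' >> /root/spark/conf/spark-env.sh
>> )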
>>
>> Then I ran ~/spark-ec2/copy-dir /root/spark/conf to distribute it to the
>> slaves, but it had no effect. Do I have to restart the workers?
>> How do I do that with spark-ec2?
>>
>> Thanks.
>>
>
