One more thing.

If you run a script interactively and want to capture the output in a file
as well as see it on the screen, you can use tee -a:

ENVFILE=$HOME/dba/bin/environment.ksh
if [[ -f $ENVFILE ]]
then
        . $ENVFILE
else
        echo "Abort: $0 failed. No environment file ( $ENVFILE ) found"
        exit 1
fi
#
FILE_NAME=`basename $0 .ksh`
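# LOGDIR is assumed to be set in the environment file sourced above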
LOG_FILE=${LOGDIR}/${FILE_NAME}.log
[ -f ${LOG_FILE} ] && rm -f ${LOG_FILE}

echo `date` " ======= Started to create oraclehadoop.sales =======" | tee -a ${LOG_FILE}
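
Any later step in the same script can append to that log in the same way, so the
output is both shown on the screen and kept in the file. A minimal sketch (the
command name below is only a placeholder, not from the original script):

run_sales_load 2>&1 | tee -a ${LOG_FILE}   # hypothetical command doing the actual work
echo `date` " ======= Finished oraclehadoop.sales =======" | tee -a ${LOG_FILE}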

HTH

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 21 July 2016 at 07:53, <luohui20...@sina.com> wrote:

> got it, I've added it into my notes.
> thank you
>
>
> --------------------------------
>
> Thanks & Best regards!
> San.Luo
>
> ----- Original Message -----
> From: Mich Talebzadeh <mich.talebza...@gmail.com>
> To: Chanh Le <giaosu...@gmail.com>
> Cc: 罗辉 <luohui20...@sina.com>, focus <focushe...@qq.com>, user <user@spark.apache.org>
> Subject: Re: run spark apps in linux crontab
> Date: 21 July 2016, 12:01
>
> you should source the environment file beforehand or inside the script itself. For example,
> this one is ksh:
>
> 0,5,10,15,20,25,30,35,40,45,50,55 * * * *
> (/home/hduser/dba/bin/send_messages_to_Kafka.ksh >
> /var/tmp/send_messages_to_Kafka.err 2>&1)
>
> in that shell script it sources the environment file:
>
> #
> # Main Section
> #
> ENVFILE=/home/hduser/.kshrc
> if [[ -f $ENVFILE ]]
> then
>         . $ENVFILE
> else
>         echo "Abort: $0 failed. No environment file ( $ENVFILE ) found"
>         exit 1
> fi
>
> Or simply in your case:
>
> 0,5,10,15,20,25,30,35,40,45,50,55 * * * *
> (. /home/hduser/.bashrc; $SPARK_HOME/bin/spark-submit.......
> > /var/tmp/somefile 2>&1)
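>
> As an aside (not from the original mail), Vixie-style cron also lets you set plain
> VAR=value lines at the top of the crontab itself, so the variables reach the job
> without sourcing anything. A minimal sketch with illustrative paths:
>
> SPARK_HOME=/usr/lib/spark
> PATH=/usr/local/bin:/usr/bin:/bin
> 0,5,10,15,20,25,30,35,40,45,50,55 * * * * $SPARK_HOME/bin/spark-submit ... > /var/tmp/somefile 2>&1
>
> Note that cron does not expand one variable inside another assignment line, so keep
> those assignments literal.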
>
>
> HTH
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> Disclaimer: Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 21 July 2016 at 04:38, Chanh Le <giaosu...@gmail.com> wrote:
>
> you should use command.sh | tee file.log
>
> On Jul 21, 2016, at 10:36 AM, <luohui20...@sina.com> wrote:
>
>
> thank you focus, and all.
> this problem was solved by adding the line ". /etc/profile" to my shell script.
>
>
> --------------------------------
>
> Thanks & Best regards!
> San.Luo
>
> ----- Original Message -----
> From: "focus" <focushe...@qq.com>
> To: "luohui20001" <luohui20...@sina.com>, "user@spark.apache.org" <user@spark.apache.org>
> Subject: Re: run spark apps in linux crontab
> Date: 20 July 2016, 18:11
>
> Hi, I just met this problem too! The reason is that the crontab runtime environment
> doesn't have the variables you defined, such as $SPARK_HOME.
> I defined SPARK_HOME and other variables in /etc/profile like this:
>
> export MYSCRIPTS=/opt/myscripts
> export SPARK_HOME=/opt/spark
>
> then, in my crontab job script daily_job.sh
>
> #!/bin/sh
>
> . /etc/profile
>
> $SPARK_HOME/bin/spark-submit $MYSCRIPTS/fix_fh_yesterday.py
>
> then, in crontab -e
>
> 0 8 * * * /home/user/daily_job.sh
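>
> (Not in the original mail: for this entry to run, daily_job.sh must also be
> executable by the crontab owner, e.g.
>
> chmod +x /home/user/daily_job.sh
>
> otherwise the job fails and cron typically mails the error to the local mailbox.)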
>
> hope this helps~
>
>
>
>
> ------------------ Original ------------------
> From: "luohui20001" <luohui20...@sina.com>
> Date: 20 July 2016 (Wednesday), 6:00 PM
> To: "user@spark.apache.org" <user@spark.apache.org>
> Subject: run spark apps in linux crontab
>
> hi guys:
>       I added a spark-submit job to my Linux crontab by the means below;
> however none of them works. If I change it to a normal shell script, it is
> OK. I don't quite understand why. I checked the 8080 web UI of my Spark
> cluster: no job was submitted, and there are no messages in
> /home/hadoop/log.
>       Any idea is welcome.
>
> [hadoop@master ~]$ crontab -e
> 1.
> 22 21 * * * sh /home/hadoop/shellscripts/run4.sh > /home/hadoop/log
>
> and in run4.sh, it wrote:
> $SPARK_HOME/bin/spark-submit --class com.abc.myclass
> --total-executor-cores 10 --jars $SPARK_HOME/lib/MyDep.jar
> $SPARK_HOME/MyJar.jar  > /home/hadoop/log
>
> 2.
> 22 21 * * * $SPARK_HOME/bin/spark-submit --class com.abc.myclass
> --total-executor-cores 10 --jars $SPARK_HOME/lib/MyDep.jar
> $SPARK_HOME/MyJar.jar  > /home/hadoop/log
>
> 3.
> 22 21 * * * /usr/lib/spark/bin/spark-submit --class com.abc.myclass
> --total-executor-cores 10 --jars /usr/lib/spark/lib/MyDep.jar
> /usr/lib/spark/MyJar.jar  > /home/hadoop/log
>
> 4.
> 22 21 * * * hadoop /usr/lib/spark/bin/spark-submit --class com.abc.myclass
> --total-executor-cores 10 --jars /usr/lib/spark/lib/MyDep.jar
> /usr/lib/spark/MyJar.jar  > /home/hadoop/log
>
> --------------------------------
>
> Thanks & Best regards!
> San.Luo
>
>
>
>
