Hi qingyang,

1. You do not need to install shark on every node.
2. Not really sure; it's just a warning, so I'd see if it works despite it.
3. You need to provide the full HDFS path, including the hdfs:// scheme, e.g.
hdfs://namenode/user2/vols.csv; see this thread:
https://groups.google.com/forum/#!topic/tachyon-users/3Da4zcHKBbY
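
For your #3, that would look something like the sketch below ('namenode' is
just a placeholder; use the host, and the port if non-default, from
fs.defaultFS in your core-site.xml):
----
-- Use the full HDFS URI instead of a bare path, so the path is not
-- resolved against the local filesystem (file:/...).
LOAD DATA INPATH 'hdfs://namenode/user/root/input/test.txt' INTO TABLE b;
----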

Lastly, as your questions are more shark- than spark-related, there is a
separate shark user group that might be more helpful.
Hope this helps


On Thu, Mar 6, 2014 at 3:25 AM, qingyang li <liqingyang1...@gmail.com>wrote:

> just an addition for #3: I have this configuration in shark-env.sh:
> ----
> export HADOOP_HOME=/usr/lib/hadoop
> export HADOOP_CONF_DIR=/etc/hadoop/conf
> export HIVE_HOME=/usr/lib/hive/
> #export HIVE_CONF_DIR=/etc/hive/conf
> export MASTER=spark://bigdata001:7077
> -----
>
>
> 2014-03-06 16:20 GMT+08:00 qingyang li <liqingyang1...@gmail.com>:
>
> hi, spark community, I have set up a 3-node cluster using spark 0.9 and
>> shark 0.9. My questions are:
>> 1. is it necessary to install shark on every node, since it is only a
>> client of the spark service?
>> 2. when I run shark-withinfo, I get these warnings:
>> WARN shark.SharkEnv: Hive Hadoop shims detected local mode, but Shark is
>> not running locally.
>> WARN shark.SharkEnv: Setting mapred.job.tracker to 'Spark_1394093746930'
>> (was 'local')
>> what are these logs trying to tell us? Are they a problem for running
>> shark?
>> 3. I want to load data from hdfs, so I run "LOAD DATA INPATH
>> '/user/root/input/test.txt' INTO TABLE b;", but I get this error: No files
>> matching path file:/user/root/input/test.txt, even though the file exists
>> on hdfs.
>>
>> thanks.
>>
>
>
