spark-shell can't properly import the default hive-site.xml options.

2015-02-01 Thread guxiaobo1982
Hi,


In order to let a local spark-shell connect to a remote Spark standalone 
cluster and access the Hive tables there, I must put the hive-site.xml file into 
the local Spark installation's conf path, but spark-shell can't even import the 
default settings there. I found two errors:
 
<property>
  <name>hive.metastore.client.connect.retry.delay</name>
  <value>5s</value>
</property>

<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>1800s</value>
</property>

spark-shell tries to read 5s and 1800s as integers; they must be changed to 5 
and 1800 to make spark-shell work. I suggest this be fixed in a future version.
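For reference, the workaround described above amounts to stripping the unit suffix from the two values in the local copy of hive-site.xml (property names as given in the post; the plain integers are then read as seconds):

```xml
<!-- Workaround: unit suffixes removed so spark-shell can parse the values -->
<property>
  <name>hive.metastore.client.connect.retry.delay</name>
  <value>5</value>
</property>
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>1800</value>
</property>
```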

Re: spark-shell can't properly import the default hive-site.xml options.

2015-02-01 Thread Denny Lee
Cool!  For all the times I have modified the hive-site.xml I had only
dropped in the integer values - learn something new every day, eh?!


On Sun Feb 01 2015 at 9:36:23 AM Ted Yu yuzhih...@gmail.com wrote:

 Looking at common/src/java/org/apache/hadoop/hive/conf/HiveConf.java :


 METASTORE_CLIENT_CONNECT_RETRY_DELAY("hive.metastore.client.connect.retry.delay",
     "1s",
     new TimeValidator(TimeUnit.SECONDS),
     "Number of seconds for the client to wait between consecutive
     connection attempts"),

 It seems having the 's' suffix is legitimate.

 On Sun, Feb 1, 2015 at 9:14 AM, Denny Lee denny.g@gmail.com wrote:

 I may be missing something here, but typically the hive-site.xml
 configuration does not require you to place an 's' within the value
 itself.  Both the retry.delay and socket.timeout values are in seconds, so
 you should only need to supply the integer value.


 On Sun Feb 01 2015 at 2:28:09 AM guxiaobo1982 guxiaobo1...@qq.com
 wrote:

 Hi,

 In order to let a local spark-shell connect to a remote Spark
 standalone cluster and access the Hive tables there, I must put the
 hive-site.xml file into the local Spark installation's conf path, but
 spark-shell can't even import the default settings there. I found two
 errors:

 <property>

   <name>hive.metastore.client.connect.retry.delay</name>

   <value>5s</value>

 </property>

 <property>

   <name>hive.metastore.client.socket.timeout</name>

   <value>1800s</value>

 </property>
 spark-shell tries to read 5s and 1800s as integers; they must be changed
 to 5 and 1800 to make spark-shell work. I suggest this be fixed in a future
 version.
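Ted's HiveConf excerpt quoted above shows the value declared with a TimeValidator, which is why a unit suffix like "5s" is legitimate on the Hive side. As a rough, hypothetical illustration of how such suffixed values can be parsed (this is a standalone sketch, not the actual Hive or Spark code), consider:

```java
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of parsing a unit-suffixed time value such as "5s" or "1800s".
// Modeled loosely on the behavior implied by Hive's TimeValidator; the
// class and method names here are invented for illustration.
public class TimeValueSketch {
    private static final Pattern TIME = Pattern.compile("(\\d+)\\s*([a-z]*)");

    // Returns the value in seconds; a bare integer is taken as seconds.
    public static long toSeconds(String raw) {
        Matcher m = TIME.matcher(raw.trim().toLowerCase());
        if (!m.matches()) {
            throw new IllegalArgumentException("Invalid time value: " + raw);
        }
        long n = Long.parseLong(m.group(1));
        switch (m.group(2)) {
            case "":      // no suffix: assume seconds
            case "s":
            case "sec":
                return n;
            case "ms":
                return TimeUnit.MILLISECONDS.toSeconds(n);
            case "min":
                return TimeUnit.MINUTES.toSeconds(n);
            case "h":
                return TimeUnit.HOURS.toSeconds(n);
            default:
                throw new IllegalArgumentException("Unknown unit: " + m.group(2));
        }
    }

    public static void main(String[] args) {
        System.out.println(toSeconds("5s"));     // 5
        System.out.println(toSeconds("1800s"));  // 1800
        System.out.println(toSeconds("1800"));   // bare integer also accepted
    }
}
```

A reader of the values as plain integers (as the older spark-shell apparently did) would fail on "5s", while a suffix-aware parser like the sketch above accepts both forms.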




