Hi,

let me take a look. Thanks for the report.

Regards
JB

On 11/22/2017 02:42 PM, lk_hadoop wrote:
hi, all:
    I'm trying Livy 0.4 with Spark 2.1:
   curl -H "Content-type: application/json" -X POST http://kafka02:8998/sessions -d '{"kind": "spark"}' | python -m json.tool    curl -H "Content-type: application/json" -X POST http://kafka02:8998/sessions/0/statements -d '{"code": "spark.sql(\"show databases\").show"}' | python -m json.tool
    curl http://kafka02:8998/sessions/0/statements/0 | python -m json.tool
{
    "code": "spark.sql(\"show databases\").show",
    "id": 0,
    "output": {
        "data": {
            "text/plain": "+------------+\n|databaseName|\n+------------+\n|     default|\n+------------+"
        },
        "execution_count": 0,
        "status": "ok"
    },
    "progress": 1.0,
    "state": "available"
}
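For reference, the three curl calls above boil down to something like the following Python sketch (same kafka02:8998 endpoint as in the commands; the requests library and the two polling loops are assumptions added for convenience, not part of the original commands):

    # Rough Python equivalent of the three curl calls above.
    # Assumptions: the `requests` library is available and the Livy
    # server is reachable at kafka02:8998.
    import json
    import time

    import requests

    LIVY = "http://kafka02:8998"
    HEADERS = {"Content-Type": "application/json"}

    # 1. Create an interactive Spark session (first curl call).
    session = requests.post(LIVY + "/sessions",
                            data=json.dumps({"kind": "spark"}),
                            headers=HEADERS).json()
    session_url = "{}/sessions/{}".format(LIVY, session["id"])

    # 2. Wait until the session leaves "starting" and becomes "idle".
    while requests.get(session_url).json()["state"] != "idle":
        time.sleep(2)

    # 3. Submit the statement (second curl call).
    stmt = requests.post(session_url + "/statements",
                         data=json.dumps({"code": 'spark.sql("show databases").show'}),
                         headers=HEADERS).json()
    stmt_url = "{}/statements/{}".format(session_url, stmt["id"])

    # 4. Poll the statement until it is "available" (third curl call),
    #    then print the plain-text result (assumes the statement succeeded,
    #    i.e. output.status is "ok" as in the JSON above).
    while True:
        result = requests.get(stmt_url).json()
        if result["state"] == "available":
            print(result["output"]["data"]["text/plain"])
            break
        time.sleep(2)

Polling GET /sessions/0 until the session is idle just avoids submitting the statement while the Spark context is still starting; the statement itself is polled until its state is available, as in the JSON above.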
It looks like the Livy session can't read the Hive metadata. I have configured Livy with SPARK_HOME and it runs in YARN mode, and hive-site.xml has also been copied to SPARK_HOME/conf/.
But when I use spark-shell:
scala> spark.sql("show databases").show
+-------------+
| databaseName|
+-------------+
|      default|
| tpcds_carbon|
|tpcds_carbon2|
| tpcds_indexr|
|tpcds_parquet|
| tpcds_source|
+-------------+
2017-11-22
--------------------------------------------------------------------------------
lk_hadoop

--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com
