@Hao, because the query joins more than one table, if I register a data frame as a temp table, Spark can't distinguish which table is the correct one. I don't know how to both set dbtable and register a temp table. Any suggestions?
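A possible reason sql() fails here, as a hedged sketch rather than a confirmed answer: SPARK-4226 (cited below in the thread) was still open at the time, so Spark SQL could not evaluate subqueries itself, including the correlated scalar subquery in the select list. One workaround, assuming both JDBC tables are loaded as separate DataFrames (the `employeesDf`/`locationsDf` names are hypothetical), is to rewrite the scalar subquery as a second join and run that through sql():

```scala
// Sketch of a workaround: express the correlated scalar subquery
// "(select _name from mock_locations t3 where t3._id = t1._location_id)"
// as an extra join on mock_locations, which pre-1.3 Spark SQL can plan.
// Table and column names are taken from the thread.
val rewritten =
  """select t1._salory as salory,
    |       t1._name   as employeeName,
    |       t3._name   as locationName
    |from mock_employees t1
    |inner join mock_locations t2 on t1._location_id = t2._id
    |inner join mock_locations t3 on t3._id = t1._location_id
    |where t1._salory > t2._max_price""".stripMargin

// Hypothetical Spark usage (needs a live SQLContext and the two
// DataFrames loaded via the JDBC source, one per dbtable):
// employeesDf.registerTempTable("mock_employees")
// locationsDf.registerTempTable("mock_locations")
// val result = sqlContext.sql(rewritten)
println(rewritten)
```

Whether this is semantically identical to the original depends on `_id` being unique in mock_locations; if it is not, the scalar subquery would have errored anyway.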
On Friday, May 15, 2015 1:38 PM, "Cheng, Hao" <hao.ch...@intel.com> wrote:

You need to register the "dataFrame" as a table first and then do queries on it? Do you mean that also failed?
From: Yi Zhang [mailto:zhangy...@yahoo.com.INVALID]
Sent: Friday, May 15, 2015 1:10 PM
To: Yi Zhang; Dev
Subject: Re: Does Spark SQL (JDBC) support nested select with current version

If I pass the whole statement as dbtable to the sqlContext.load() method as below:

    val query = """(select t1._salory as salory,
      |t1._name as employeeName,
      |(select _name from mock_locations t3 where t3._id = t1._location_id) as locationName
      |from mock_employees t1
      |inner join mock_locations t2
      |on t1._location_id = t2._id
      |where t1._salory > t2._max_price) EMP""".stripMargin

    val dataFrame = sqlContext.load("jdbc", Map(
      "url" -> url,
      "driver" -> "com.mysql.jdbc.Driver",
      "dbtable" -> query))

it works. However, I can't invoke the sql() method to solve this problem. Why?

On Friday, May 15, 2015 11:33 AM, Yi Zhang <zhangy...@yahoo.com.INVALID> wrote:

The SQL statement is like this:

    select t1._salory as salory,
           t1._name as employeeName,
           (select _name from mock_locations t3 where t3._id = t1._location_id) as locationName
    from mock_employees t1
    inner join mock_locations t2 on t1._location_id = t2._id
    where t1._salory > t2._max_price

I noticed that the issue [SPARK-4226] "SparkSQL - Add support for subqueries in predicates" (ASF JIRA) is still in progress, and somebody commented on it that Spark 1.3 would support this, so I don't know the current status of the feature. Thanks.

Regards,
Yi
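A minimal sketch of why the dbtable trick above works, under the assumption (consistent with the thread) that the JDBC data source splices the value of "dbtable" directly into a query it sends to the database: any statement wrapped in parentheses and given an alias (EMP above) is a valid derived table in "SELECT ... FROM <dbtable>", so the nested select is evaluated by MySQL itself and Spark never has to plan the subquery. The helper below is illustrative, not part of any Spark API:

```scala
// Hypothetical helper: wrap an arbitrary SQL query so it can be passed
// as the "dbtable" option of the JDBC source. The database, not Spark,
// then executes the subquery.
object JdbcPushdown {
  def asDbTable(query: String, alias: String): String =
    s"(${query.trim}) $alias"
}

// Hypothetical usage, mirroring the sqlContext.load() call in the thread
// (url and driver as placeholders; needs a live SQLContext):
// val df = sqlContext.load("jdbc", Map(
//   "url"     -> url,
//   "driver"  -> "com.mysql.jdbc.Driver",
//   "dbtable" -> JdbcPushdown.asDbTable("select _name from mock_locations", "T")))
println(JdbcPushdown.asDbTable("select 1 as x", "t"))
```

The trade-off is that the whole query must be expressible in the database's SQL dialect, and Spark sees only the already-joined result.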