You can patch it in your own branch. In Spark
2.0, the simple SQL parser will be replaced by the HQL parser, so it will no longer be
a problem then. Hao

From: Yi Zhang [mailto:zhangy...@yahoo.com.INVALID]
Sent: Wednesday, December 30, 2015 11:41 AM
To: User
Subject: Does Spark SQL support rollup
"sum"))
But in my scenario, I'd prefer to use SQL syntax in SQLContext to support rollup,
similar to what HiveContext does. Any suggestions?
Thanks.

Regards,
Yi Zhang
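For reference, ROLLUP over two grouping columns computes aggregates at three levels: per (column1, column2) pair, per column1 subtotal, and the grand total. A minimal pure-Python sketch of those semantics (no Spark needed; the region/year column names and data are made up for illustration):

```python
# Pure-Python sketch of what GROUP BY ROLLUP(region, year) computes.
# Each input row contributes to three grouping levels:
# (region, year), (region, ALL), and (ALL, ALL).
from collections import defaultdict

rows = [
    ("east", "2015", 10),
    ("east", "2016", 20),
    ("west", "2015", 5),
]

def rollup_sum(rows):
    totals = defaultdict(int)
    for region, year, amount in rows:
        # None stands in for SQL's "all values" placeholder in ROLLUP output.
        for key in ((region, year), (region, None), (None, None)):
            totals[key] += amount
    return dict(totals)

print(rollup_sum(rows))
```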
I am not sure what happened. According to your screenshot, it just shows a warning
message, not an error. But I suggest you try Maven with: mvn
idea:idea.
On Monday, May 25, 2015 2:48 PM, huangzheng <1106944...@qq.com> wrote:
Hi all, I want to learn the Spark source code
rec
Hi all,
I wanted to join two data frames with Spark SQL in IntelliJ, and wrote these
lines:

df1.as('first).join(df2.as('second), $"first._1" === $"second._1")

IntelliJ reports errors for $ and === in red.
I found that $ and === are defined as implicit conversions in
org.apa
the reason. Who can help me?
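Setting the IDE highlighting aside, the join expression above pairs rows whose first elements are equal. A plain-Python sketch of that equality join (the tuple data is made up for illustration):

```python
# Plain-Python sketch of an equality join on the first tuple element,
# mirroring df1.join(df2, $"first._1" === $"second._1").
df1 = [(1, "a"), (2, "b")]
df2 = [(1, "x"), (3, "y")]

def equi_join(left, right):
    # Index the right side by join key, then probe with each left row
    # (a simple hash join).
    index = {}
    for row in right:
        index.setdefault(row[0], []).append(row)
    return [(l, r) for l in left for r in index.get(l[0], [])]

print(equi_join(df1, df2))
```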
On Friday, May 15, 2015 4:06 PM, Yi Zhang
wrote:
Hi all,
I run start-master.sh to start standalone Spark at
spark://192.168.1.164:7077. Then I use the command below, and it works:

./bin/spark-shell --master spark://192.168.1.164:7077

The console printed the correct messages, and the Spark context was initialised
correctly.
However, when I run
Yes.

From: Yi Zhang
[mailto:zhangy...@yahoo.com]
Sent: Friday, May 15, 2015 2:51 PM
Spark SQL just takes JDBC as a new data source, the same as loading data from a .csv or
.json file.

From: Yi Zhang [mailto:zhangy...@yahoo.com.INVALID]
Sent: Friday, May 15, 2015 2:30 PM
To: Us
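The point that JDBC is "just another data source" can be illustrated with a toy registry of loaders keyed by format, in the spirit of Spark's pluggable data sources (the loader names and registry here are made up for illustration, not Spark's API):

```python
# Toy sketch of a pluggable data-source registry: jdbc is registered
# alongside csv, with no special status. Names are illustrative only.
loaders = {}

def register(fmt):
    def wrap(fn):
        loaders[fmt] = fn
        return fn
    return wrap

@register("csv")
def load_csv(path):
    return f"rows from csv at {path}"

@register("jdbc")
def load_jdbc(url):
    return f"rows from jdbc at {url}"

def load(fmt, location):
    # The engine dispatches on format; every source looks the same from here.
    return loaders[fmt](location)

print(load("jdbc", "jdbc:postgresql://host/db"))
```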
Hi All,
Compared with direct access via JDBC, what are the advantages of Spark
SQL (JDBC) for accessing an external data source?
Any tips are welcome! Thanks.

Regards,
Yi
performance like there?
On 4 May 2015, at 16:04, Robin East wrote:
What query are you running? It may be the case that your query requires
PostgreSQL to do a large amount of work before identifying the first n rows.
On 4 May 2015, at 15:52, Yi Zhang wrote:
I am trying to query PostgreSQL using LIMIT(n) to reduce memory use and
improve query performance, but I found it took as long as the same query
without LIMIT. That confused me. Does anybody know why?
Thanks.

Regards,
Yi
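Robin's point can be illustrated in plain Python: if the query must sort (or otherwise process every row) before the first n rows are known, LIMIT cannot cut the work; it only trims the final output. The data below is made up for illustration:

```python
# Sketch of why LIMIT after ORDER BY still costs a full scan.
import random

rows = [random.random() for _ in range(100_000)]

def limit_after_order_by(rows, n):
    # ORDER BY forces touching and sorting every row before the
    # first n can be returned; LIMIT only trims the sorted output.
    return sorted(rows)[:n]

def limit_without_order_by(rows, n):
    # Without ORDER BY, the scan can stop as soon as n rows are found.
    return rows[:n]

assert len(limit_after_order_by(rows, 10)) == 10
```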