From: Nicholas Hakobian
Date: Friday, December 29, 2017 at 8:10 PM
To: "Lalwani, Jayesh"
Cc: "user@spark.apache.org"
Subject: Re: Subqueries
This sounds like a perfect example of using windowing functions. Have you tried
something like the following:
select ACCT_ID, CR_RVKD_S
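The suggested query is cut off in the archive, so the shape of the windowing approach has to be inferred. Below is a minimal sketch of the ranking pattern being described: rank all rows by the incrementing column, then keep only the top-ranked ones. The table name, column names, and sample data are invented, and sqlite3 (which supports window functions from version 3.25) stands in for a Spark session purely to illustrate the SQL; in Spark the same statement would be passed to sqlContext.sql.

```python
import sqlite3

# sqlite3 stands in for Spark here just to demonstrate the SQL pattern.
# Window functions need SQLite >= 3.25 (bundled with recent Pythons).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE records (acct_id INTEGER, val TEXT, instnc_id INTEGER);
    INSERT INTO records VALUES (1, 'old', 10), (1, 'new', 11),
                               (2, 'old', 10), (2, 'new', 11);
""")
# Rank every row by instnc_id descending, then keep only rank 1,
# i.e. the rows carrying the table-wide maximum instnc_id.
rows = conn.execute("""
    SELECT acct_id, val, instnc_id FROM (
        SELECT *, RANK() OVER (ORDER BY instnc_id DESC) AS rnk
        FROM records
    ) WHERE rnk = 1
    ORDER BY acct_id
""").fetchall()
print(rows)  # only the instnc_id = 11 rows survive
```

Ties on the maximum instnc_id all receive rank 1, so every "latest" row is kept, which is exactly the behavior the question asks for.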
I have a table, and I want to find the latest records in it. The table has a
column called instnc_id that is incremented every day, so I want to find the
records that have the max instnc_id.
I am trying to do this using subqueries, but it gives me an error. For example,
when I try this
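The failing query itself is truncated in the archive. A common formulation of "rows with the max instnc_id" is a scalar subquery like the one sketched below; the table and data are made up, and sqlite3 stands in for a Spark session to show the intended semantics. Spark versions before 2.0 rejected this form with an analysis error, which is consistent with the problem being described.

```python
import sqlite3

# Invented table; sqlite3 is only illustrating the SQL semantics.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (rec TEXT, instnc_id INTEGER);
    INSERT INTO t VALUES ('a', 1), ('b', 2), ('c', 2);
""")
# Scalar subquery: keep rows whose instnc_id equals the table-wide max.
latest = conn.execute("""
    SELECT rec FROM t
    WHERE instnc_id = (SELECT MAX(instnc_id) FROM t)
    ORDER BY rec
""").fetchall()
print(latest)  # [('b',), ('c',)]
```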
Hi,
From the Spark 2.0 release webinar, what I understood is that the newer version
has significantly expanded the SQL capabilities of Spark, with the
introduction of a new ANSI SQL parser and support for subqueries. It also
says Spark 2.0 can run all 99 TPC-DS queries, which require many
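Among the subquery shapes the expanded parser accepts are IN/EXISTS predicate subqueries, including correlated ones. A correlated EXISTS example is sketched below; the table names and data are invented, and sqlite3 stands in for a Spark session since the SQL itself is standard.

```python
import sqlite3

# Invented customers/orders tables; sqlite3 illustrates the standard SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customerid INTEGER, amount INTEGER);
    INSERT INTO orders VALUES (2, 50), (3, 75);
    CREATE TABLE customers (customerid INTEGER, name TEXT);
    INSERT INTO customers VALUES (2, 'ann'), (3, 'bob'), (5, 'eve');
""")
# Correlated EXISTS subquery: customers that placed at least one order.
rows = conn.execute("""
    SELECT name FROM customers c
    WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customerid = c.customerid)
    ORDER BY name
""").fetchall()
print(rows)  # [('ann',), ('bob',)]
```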
These tables are stored in HDFS as Parquet. Can sqlContext be applied to
the subqueries?
On Tue, Feb 23, 2016 at 5:31 PM, Mich Talebzadeh <
mich.talebza...@cloudtechnologypartners.co.uk> wrote:
> Assuming these are all in Hive, you can either use spark-sql or
> spark-shell.
Hi,
How do I join multiple tables and use subqueries in Spark SQL using
sqlContext? Can I do this using sqlContext or do I have to use HiveContext
for the same?
Thanks!
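Once the tables are registered, a join combined with a subquery is ordinary SQL. The sketch below shows the combined shape being asked about, with invented emp/dept tables and sqlite3 standing in for a Spark session; in Spark 1.x the same statement would go through sqlContext.sql (or HiveContext for constructs the basic parser rejects).

```python
import sqlite3

# Invented emp/dept tables; sqlite3 only demonstrates the SQL shape.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp  (id INTEGER, dept_id INTEGER, name TEXT);
    CREATE TABLE dept (id INTEGER, name TEXT);
    INSERT INTO emp  VALUES (1, 10, 'ann'), (2, 20, 'bob');
    INSERT INTO dept VALUES (10, 'eng'), (20, 'hr');
""")
# A join combined with an IN-subquery, as in the question.
rows = conn.execute("""
    SELECT e.name, d.name
    FROM emp e JOIN dept d ON e.dept_id = d.id
    WHERE e.dept_id IN (SELECT id FROM dept WHERE name = 'eng')
""").fetchall()
print(rows)  # [('ann', 'eng')]
```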
To: "user@spark.apache.org"
Subject: Re: SparkSQL - No support for subqueries in 1.2-snapshot?
This is not supported yet. It would be great if you could open a JIRA (though
I think apache JIRA is down ATM).
On Tue, Nov 4, 2014 a
> customerid
> 2
> 3
> TOK_TABLE_OR_COL
> customerid
> " +
> org.apache.spark.sql.hive.HiveQl$.nodeToExpr(HiveQl.scala:1098)
> at scala.sys.package$.error(package.scala
at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
Are subqueries in predicates just not supported in 1.2? I think I’m seeing the
same issue as:
http://apache-spark-user-list.1001560.n3.nabbl
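The failing query is not quoted in full, but the TOK_TABLE_OR_COL / customerid fragments in the parse dump above suggest an IN-predicate subquery. The reconstruction below is hypothetical (table and data invented), with sqlite3 used to show what the query means; it is this predicate-subquery form that Spark 1.2's HiveQl parser rejected.

```python
import sqlite3

# Hypothetical reconstruction of the query shape implied by the error.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customerid INTEGER, total INTEGER);
    INSERT INTO orders VALUES (2, 10), (3, 20), (4, 5);
""")
# IN-predicate subquery: customers whose orders exceed a threshold.
rows = conn.execute("""
    SELECT customerid FROM orders
    WHERE customerid IN (SELECT customerid FROM orders WHERE total > 8)
    ORDER BY customerid
""").fetchall()
print(rows)  # [(2,), (3,)]
```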