One workaround: load the pairs into their own table and use an IN subquery;

Table coordinates (
  Integer X,
  Integer Y
)

sparkSqlContext.sql("select * from mytable where key = 1 and (X, Y) IN (select X, Y from coordinates)")
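The "put the pairs in a table, then subquery" pattern above is engine-agnostic. A minimal sketch of the same shape, using SQLite rather than Spark (so it runs standalone) and a JOIN instead of tuple IN, since not every engine accepts row values; table names and data are made up for illustration:

```python
import sqlite3

# Sketch of the workaround: load the wanted (X, Y) pairs into their own
# table, then filter mytable against it. A JOIN is used in place of
# "(X, Y) IN (SELECT ...)" because the join form works everywhere.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (key INTEGER, X INTEGER, Y INTEGER);
    CREATE TABLE coordinates (X INTEGER, Y INTEGER);
""")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)",
                 [(1, 1, 2), (1, 3, 4), (1, 5, 6), (2, 1, 2)])
conn.executemany("INSERT INTO coordinates VALUES (?, ?)", [(1, 2), (3, 4)])

# Equivalent in intent to:
#   SELECT * FROM mytable WHERE key = 1 AND (X, Y) IN (SELECT X, Y FROM coordinates)
rows = sorted(conn.execute("""
    SELECT m.key, m.X, m.Y
    FROM mytable m
    JOIN coordinates c ON m.X = c.X AND m.Y = c.Y
    WHERE m.key = 1
""").fetchall())
print(rows)  # -> [(1, 1, 2), (1, 3, 4)]
```

In Spark the same idea would mean registering the coordinates data as a table and joining, which also avoids parsing a huge literal list.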
From: onmstester onmstester <onmstes...@zoho.com>
Sent: Wednesday, May 23, 2018 10:33 AM
To: user <user@spark.apache.org>
Subject: spark sql in-clause problem
I'm reading from this table in Cassandra:

Table mytable (
  Integer Key,
  Integer X,
  Integer Y
)

Using:

sparkSqlContext.sql("select * from mytable where key = 1 and (X, Y) in ((1, 2), (3, 4))")
Encountered error:

StructType(StructField(X,IntegerType,true), StructField(Y,IntegerType,true))
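When the engine rejects a tuple-valued IN like the one above, a common workaround is to expand the pair list into an OR of equality conjunctions before handing the string to sql(). A minimal sketch in plain Python; the helper name is hypothetical, and inlining values like this is only safe for trusted numeric literals:

```python
def tuple_in_predicate(cols, rows):
    """Expand (X, Y) IN ((1, 2), (3, 4)) into
    ((X = 1 AND Y = 2) OR (X = 3 AND Y = 4)).
    Only safe for trusted numeric literals (no escaping is done)."""
    clauses = []
    for row in rows:
        # One conjunction per tuple: X = 1 AND Y = 2
        parts = " AND ".join(f"{c} = {v}" for c, v in zip(cols, row))
        clauses.append(f"({parts})")
    return "(" + " OR ".join(clauses) + ")"

pred = tuple_in_predicate(("X", "Y"), [(1, 2), (3, 4)])
print(pred)  # -> ((X = 1 AND Y = 2) OR (X = 3 AND Y = 4))
# Then, e.g.: sparkSqlContext.sql(f"select * from mytable where key = 1 and {pred}")
```

For more than a handful of pairs, though, the coordinates-table subquery or a join scales better than a generated predicate string.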
>> [SPARK-8077] [SQL] Optimization for TreeNodes with large numbers of
>> children
>>
>> From the numbers Michael published, 1 million numbers would still need
>> 250 seconds to parse.
>>
>> On Fri, Dec 4, 2015 at 10:14 AM, Madabhattula Rajesh Kumar <
>> mrajaf...@gmail.com> wrote:
Hi,

What are the best practices for using an "IN" clause in Spark SQL?

Use case: read from a table filtered by a number, where I have a large list of numbers; for example, 1 million.

Regards,
Rajesh
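Per the SPARK-8077 numbers quoted above, a single literal IN list of 1 million values is dominated by parse time, so the usual advice is to load the values into a table and join (as in the coordinates example), or to batch the list into several smaller IN clauses. A sketch of the batching half in plain Python; the function name and batch size are illustrative:

```python
def in_clause_batches(column, values, batch_size=1000):
    """Split a large value list into several smaller IN clauses.
    Each yielded predicate can be run as a separate query and the
    results unioned, keeping every generated SQL string small."""
    for i in range(0, len(values), batch_size):
        chunk = values[i:i + batch_size]
        yield f"{column} IN ({', '.join(str(v) for v in chunk)})"

clauses = list(in_clause_batches("number", list(range(5)), batch_size=2))
print(clauses)
# -> ['number IN (0, 1)', 'number IN (2, 3)', 'number IN (4)']
```

For a list as large as a million entries, registering the values as their own table and doing a join is generally the cleaner option, since it avoids generating SQL text proportional to the list size at all.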
l_orderkey = o_orderkey
and l_commitdate < l_receiptdate
)
group by
o_orderpriority
order by
o_orderpriority;

Thanks
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-Exists-Clause-tp17307.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
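Early Spark SQL releases did not accept EXISTS subqueries in WHERE, and the usual workaround was a semi-join-style rewrite (LEFT SEMI JOIN, or an IN subquery on the key). The equivalence can be sketched engine-agnostically; the example below uses SQLite and made-up toy rows purely to show that the two query shapes return the same result:

```python
import sqlite3

# Toy versions of the orders/lineitem tables from the query above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (o_orderkey INTEGER, o_orderpriority TEXT);
    CREATE TABLE lineitem (l_orderkey INTEGER, l_commitdate TEXT, l_receiptdate TEXT);
""")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "1-URGENT"), (2, "2-HIGH"), (3, "2-HIGH")])
conn.executemany("INSERT INTO lineitem VALUES (?, ?, ?)",
                 [(1, "1994-01-01", "1994-02-01"),   # commit < receipt: qualifies
                  (2, "1994-03-01", "1994-02-01")])  # commit > receipt: does not

exists_sql = """
    SELECT o_orderpriority, COUNT(*) FROM orders
    WHERE EXISTS (SELECT 1 FROM lineitem
                  WHERE l_orderkey = o_orderkey AND l_commitdate < l_receiptdate)
    GROUP BY o_orderpriority ORDER BY o_orderpriority
"""
# Semi-join-style rewrite: filter on the key set instead of EXISTS.
semijoin_sql = """
    SELECT o_orderpriority, COUNT(*) FROM orders
    WHERE o_orderkey IN (SELECT l_orderkey FROM lineitem
                         WHERE l_commitdate < l_receiptdate)
    GROUP BY o_orderpriority ORDER BY o_orderpriority
"""
result = conn.execute(exists_sql).fetchall()
print(result)  # -> [('1-URGENT', 1)]
assert result == conn.execute(semijoin_sql).fetchall()
```

In Spark SQL the second shape would typically be written as a LEFT SEMI JOIN from orders to the filtered lineitem rows; current Spark versions also accept the EXISTS form directly.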