I think the recommended approach is to create a DataFrame using HBase as the
source. Then you can run any SQL on that DataFrame.
On Spark 1.2 you can create the base RDD and then apply a schema in the same
manner.
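Something along these lines should work (a rough sketch, untested; the table
"mytable", column family "cf", and qualifier "q1" are placeholders, the imports
assume Spark 1.3, and on 1.2 you'd call sqlContext.applySchema instead of
createDataFrame):

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

val conf = HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE, "mytable")
// Restrict the scan at the HBase level; Spark SQL won't push this down
conf.set(TableInputFormat.SCAN_ROW_START, "startKey")
conf.set(TableInputFormat.SCAN_ROW_STOP, "stopKey")

// RDD of (row key, Result) pairs straight out of HBase
// (sc is the SparkContext, e.g. from spark-shell)
val hbaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
  classOf[ImmutableBytesWritable], classOf[Result])

// Project the row key and one column into Rows
val rowRDD = hbaseRDD.map { case (key, result) =>
  Row(Bytes.toString(key.get()),
    Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q1"))))
}

val schema = StructType(Seq(
  StructField("rowkey", StringType),
  StructField("q1", StringType)))

val sqlContext = new SQLContext(sc)
val df = sqlContext.createDataFrame(rowRDD, schema)  // applySchema on 1.2
df.registerTempTable("mytable")
sqlContext.sql("SELECT rowkey, q1 FROM mytable WHERE q1 = 'foo'").show()

Note the scan start/stop rows are set on TableInputFormat because, as Ted
said, Spark SQL won't push a range predicate down to HBase; any WHERE clause
is applied after the rows have already been read into Spark.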
On 21 Apr 2015 03:12, "Jeetendra Gangele" <gangele...@gmail.com> wrote:

> Thanks for reply.
>
> Would using Phoenix inside Spark be useful?
>
> What is the best way to bring data from HBase into Spark in terms of
> application performance?
>
> Regards
> Jeetendra
>
> On 20 April 2015 at 20:49, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> To my knowledge, Spark SQL currently doesn't provide range scan
>> capability against HBase.
>>
>> Cheers
>>
>>
>>
>> > On Apr 20, 2015, at 7:54 AM, Jeetendra Gangele <gangele...@gmail.com>
>> wrote:
>> >
>> > Hi All,
>> >
>> > I am querying HBase, combining the results, and using them in my Spark job.
>> > I am querying HBase using the HBase client API inside my Spark job.
>> > Can anybody tell me whether Spark SQL will be fast enough and whether it
>> > supports range queries?
>> >
>> > Regards
>> > Jeetendra
>> >
>>
