Re: Read Full Phoenix Table

2016-07-11 Thread Mohanraj Ragupathiraj
Thank you for your reply. I tried passing the PKs through an IN clause, but the number of PKs to match between the files and the Phoenix table can sometimes reach 70 million, and I felt it would be much slower with an IN clause. May I know how many PKs you passed through the IN clause?

Re: Read Full Phoenix Table

2016-07-11 Thread Simon Wang
I actually did something similar recently. If you are joining on primary keys, you can do a batch query with the IN clause.
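The batch-query idea above can be sketched as follows. This is a minimal illustration, not code from the thread: the table name, column name, and batch size are hypothetical, and the generated SQL would be executed through whatever Phoenix client you use (JDBC, phoenixdb, etc.). The point is to split the key set into bounded batches so no single IN clause grows unmanageably large.

```python
def batched_in_queries(table, pk_col, keys, batch_size=1000):
    """Yield (sql, params) pairs, one parameterized IN-clause query
    per batch of primary keys. Table/column names are assumptions
    for illustration; execute each pair with your Phoenix client."""
    for i in range(0, len(keys), batch_size):
        batch = keys[i:i + batch_size]
        placeholders = ", ".join(["?"] * len(batch))
        sql = f"SELECT * FROM {table} WHERE {pk_col} IN ({placeholders})"
        yield sql, batch

# Example: 2,500 keys split into 3 batches of at most 1,000 keys each.
queries = list(batched_in_queries("MY_TABLE", "PK", list(range(2500))))
```

With 70 million keys this still means tens of thousands of round trips, which is why the original poster was skeptical; at that scale a server-side join (or a bulk export) is usually considered instead.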

Read Full Phoenix Table

2016-07-11 Thread Mohanraj Ragupathiraj
Hi, I have a scenario in which I have to load a Phoenix table as a *whole* and join it with multiple files in Spark. But it takes around 30 minutes just to read 600 million records from the Phoenix table. I feel it is inappropriate to load the full table data, as HBase works best for random lookups.