Jerry and Mich,
Thanks. I will look a bit more into this. It is probably an interesting and
useful feature to have.
Demai
On Sat, Oct 22, 2016 at 12:02 PM, Jerry He wrote:
> Hi, Demai
>
> If you think something helpful can be done within HBase, feel free to
> propose on the JIRA.
On Thu, Oct 20, 2016 at 8:46 AM Anil wrote:
> Hi,
>
> I am loading an HBase table into an in-memory DB to support filtering,
> ordering, and pagination.
>
> I am scanning regions and inserting the data into the in-memory DB. Each
> region scan is done in a single thread, so each
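The quoted message is cut off, but the pattern it describes (one scan task per
region, feeding a shared in-memory store) can be sketched in plain Java. This
is a minimal, hypothetical sketch: fetchRegion() is a stand-in for a real
HBase Scan bounded by a region's start/end keys, and the region list here is
invented; in real code you would obtain boundaries from RegionLocator and
replace the stub with Table.getScanner(scan).

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: run each region's scan as its own task so regions are read
// concurrently, then merge the results into one in-memory collection.
public class ParallelRegionLoad {

    // Stand-in for one region's scan: returns the rows in that region.
    // In real code this would be a Scan with setStartRow/setStopRow.
    static List<String> fetchRegion(String startKey, String endKey) {
        return Arrays.asList(startKey + ":row1", startKey + ":row2");
    }

    // Scan all regions concurrently; the List here stands in for the
    // in-memory DB the thread describes.
    static List<String> loadAll(List<String[]> regions, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<List<String>>> futures = new ArrayList<>();
        for (String[] r : regions) {
            futures.add(pool.submit(() -> fetchRegion(r[0], r[1])));
        }
        List<String> all = new ArrayList<>();
        for (Future<List<String>> f : futures) {
            all.addAll(f.get());            // propagates scan failures
        }
        pool.shutdown();
        return all;
    }

    public static void main(String[] args) throws Exception {
        List<String[]> regions = Arrays.asList(
                new String[]{"", "m"},      // hypothetical region boundaries
                new String[]{"m", ""});
        System.out.println(loadAll(regions, 2).size());
    }
}
```

The thread-pool size caps concurrency independently of the region count, which
matters when a table has many more regions than the client has cores.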
Anil,
You could also try Splice Machine (Open Source).
Regards,
John Leach
> On Oct 21, 2016, at 4:05 AM, Anil wrote:
>
> Thank you, Ram. Now it's clear. I will take a look at it.
>
> Thanks again.
>
> On 21 October 2016 at 14:25, ramkrishna vasudevan <
>
Hi, Demai
If you think something helpful can be done within HBase, feel free to
propose on the JIRA.
Jerry
On Fri, Oct 21, 2016 at 2:41 PM, Mich Talebzadeh
wrote:
> Hi Demai,
>
> As I understand it, you want to use HBase as the real-time layer and the Hive
> Data Warehouse
It is based on the number of live regions.
Jerry
On Fri, Oct 21, 2016 at 7:50 AM, Vadim Vararu
wrote:
> Hi guys,
>
> I'm trying to run the ImportTsv job and write the result into a remote
> HDFS. Isn't it supposed to write data concurrently? Asking because I get the
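Jerry's reply above ("it is based on the number of live regions") refers to how
the bulk-load write phase is sized: output is partitioned per region, so write
parallelism tracks the live region count. The toy sketch below illustrates that
row-to-region assignment; the split keys are hypothetical, and this is not
Hadoop's actual partitioner code, just the idea behind it.

```java
// Toy illustration: each row is assigned to the region whose key range
// contains it, so the number of concurrent writers in a bulk load
// equals the number of live regions. Split keys here are invented;
// HBase derives the real ones from the table's region boundaries.
public class RegionPartitioner {

    // Region 0 covers ["", splits[0]); region i covers
    // [splits[i-1], splits[i]); the last region is unbounded above.
    static int partitionFor(String rowKey, String[] splits) {
        int region = 0;
        for (String split : splits) {
            if (rowKey.compareTo(split) < 0) break;
            region++;
        }
        return region;
    }

    public static void main(String[] args) {
        String[] splits = {"g", "p"};      // 3 regions -> 3 writers
        System.out.println(partitionFor("apple", splits));  // region 0
        System.out.println(partitionFor("kiwi", splits));   // region 1
        System.out.println(partitionFor("zebra", splits));  // region 2
    }
}
```

One practical consequence: a table with a single region gives a bulk load only
one write partition, which is why pre-splitting a new table is often advised
before large imports.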
Hi,
I have an HBase table that is populated via
org.apache.hadoop.hbase.mapreduce.ImportTsv
through a bulk load every 15 minutes. This works fine.
In Phoenix I created a view on this table
jdbc:phoenix:rhes564:2181> create index marketDataHbase_idx on
"marketDataHbase" ("price_info"."ticker",