Hello,
What is the fastest way to dump the contents of an HBase table to HDFS? Is it
possible to use an HBase snapshot + Spark to do this?
We currently use an HBase snapshot + MapReduce v2 (reading the HFiles directly,
not via HTable) to convert the HFiles to ORC files, but we found the 'spilling
map output' phase to be a significant cost.
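For reference, the snapshot route avoids the region servers entirely, which is usually the fastest way to get table data onto HDFS. A minimal sketch, assuming hypothetical table/snapshot names, HDFS URI, and mapper count:

```shell
# In the HBase shell, take a snapshot (metadata only, no data copy):
#   hbase> snapshot 'mytable', 'mytable-snap'

# Export the snapshot's HFiles straight to another HDFS location,
# reading them from the filesystem rather than through the region servers:
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot mytable-snap \
  -copy-to hdfs://namenode:8020/hbase-backup \
  -mappers 16
```

For the Spark variant, a job can read a snapshot without touching the live cluster via `TableSnapshotInputFormat` (passed to `newAPIHadoopRDD`), which also sidesteps the map-output spill you see in a shuffle-heavy MapReduce job, since a map-only Spark pipeline writing ORC needs no sort/spill phase.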
Hello list,
Is there a way to load existing data (HFiles) from CDH 4.3.0 into CDH 5.4.0?
We used the completebulkload utility, following this link:
http://hbase.apache.org/0.94/book/ops_mgt.html#completebulkload
The command: hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
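A sketch of the usual two-step migration, assuming hypothetical host names, paths, and table name (the HFiles are first copied between clusters, then bulk-loaded on the destination):

```shell
# Step 1: copy the exported HFiles from the CDH 4.3.0 cluster to the
# CDH 5.4.0 cluster; distcp over hftp/webhdfs is commonly used for
# cross-version copies:
hadoop distcp hftp://cdh4-namenode:50070/hbase-export \
  hdfs://cdh5-namenode:8020/hbase-import

# Step 2: on the destination cluster, bulk-load the HFiles into the
# target table (which must already exist):
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  /hbase-import mytable
```

LoadIncrementalHFiles will split HFiles on the fly if their key ranges no longer match the destination table's region boundaries, so pre-splitting the target table to match the source regions keeps the load fast.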
Hello list,
Can anyone give me some follow-up information about the issue HBASE-3529?
I'm wondering why it has had no updates for more than two years.
Best,
--
*Rick Dong*
working on this issue.
Can you tell us your use case?
Have you looked at http://www.lilyproject.org/lily/index.html ?
Thanks
On Thu, May 30, 2013 at 7:06 PM, dong.yajun dongt...@gmail.com wrote:
Hello list,
Can anyone give me some follow-up information about the issue
HBASE-3529?
List, I'd like to unsubscribe from this mailing list.
Thanks.
--
*Rick Dong*
Hi Jack,
You can use Solr over HBase: Solr stores the index data, and HBase stores
the actual data.
Thanks
Rick
On Wed, Jun 6, 2012 at 9:56 PM, Otis Gospodnetic otis_gospodne...@yahoo.com
wrote:
https://issues.apache.org/jira/browse/HBASE-3529
Otis
Performance Monitoring for Solr
Hi NNever,
If you find any issues, please let us know. Thanks.
On Wed, Jun 6, 2012 at 5:09 PM, NNever nnever...@gmail.com wrote:
I'm sorry, the log4j level is now WARN, not INFO.
2012/6/6 NNever nnever...@gmail.com
We currently run in INFO mode.
It actually did the split, but I cannot find any