But the whole idea of storing large files on HDFS would be defeated then, right?
Why do you think we need to bring them back into HBase?
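
For reference, a rough sketch of the write path being discussed, using the plain
Java clients: the document bytes go to HDFS and only a small metadata row goes to
HBase. The table name "doc_metadata", the column family "m" and the "/documents"
directory below are placeholders I made up, not anything from the original mail.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DocumentStore {

    // Writes the document to HDFS and records its location in HBase.
    // "doc_metadata", family "m" and "/documents" are placeholder names.
    public static void putDocument(String docId, byte[] documentBytes) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // 1) The large payload goes to HDFS, not into an HBase cell.
        FileSystem fs = FileSystem.get(conf);
        Path docPath = new Path("/documents/" + docId);
        try (FSDataOutputStream out = fs.create(docPath, true)) {
            out.write(documentBytes);
        }

        // 2) Only the small metadata row goes to HBase.
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("doc_metadata"))) {
            Put put = new Put(Bytes.toBytes(docId));
            put.addColumn(Bytes.toBytes("m"), Bytes.toBytes("hdfs_path"),
                    Bytes.toBytes(docPath.toString()));
            put.addColumn(Bytes.toBytes("m"), Bytes.toBytes("size"),
                    Bytes.toBytes((long) documentBytes.length));
            table.put(put);
        }
    }
}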

On Thu, Feb 18, 2016 at 10:23 PM, Jameson Li <hovlj...@gmail.com> wrote:

> Maybe you can parse the HDFS image file, then transform the entries into
> HFiles and bulk load them into HBase tables.
> --remember to partition (pre-split) the HBase table
>
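
(For anyone who does go the bulk-load route, here is a rough HBase 1.x-style
sketch of the pre-split + LoadIncrementalHFiles step being referred to above.
The table name, column family, split keys and HFile directory are all made-up
placeholders, not something from Jameson's mail.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.util.Bytes;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName name = TableName.valueOf("documents");   // placeholder table name

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {

            // Pre-split ("partition") the table so the load spreads across regions
            // instead of hammering a single one; these split keys are arbitrary.
            byte[][] splitKeys = {
                Bytes.toBytes("2"), Bytes.toBytes("4"),
                Bytes.toBytes("6"), Bytes.toBytes("8")
            };
            HTableDescriptor desc = new HTableDescriptor(name);
            desc.addFamily(new HColumnDescriptor("d"));     // placeholder family
            admin.createTable(desc, splitKeys);

            // Bulk load HFiles that an MR job (e.g. one using HFileOutputFormat2)
            // already wrote under /tmp/hfiles (placeholder path).
            try (Table table = conn.getTable(name);
                 RegionLocator locator = conn.getRegionLocator(name)) {
                new LoadIncrementalHFiles(conf)
                        .doBulkLoad(new Path("/tmp/hfiles"), admin, table, locator);
            }
        }
    }
}
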
> 2016-02-18 7:40 GMT+08:00 Arun Patel <arunp.bigd...@gmail.com>:
>
> > I would like to store large documents (over 100 MB) on HDFS and insert
> > metadata in HBase.
> >
> > 1) Users will use the HBase REST API for PUT and GET requests for storing
> > and retrieving documents. In this case, how do I PUT and GET documents
> > to/from HDFS? What are the recommended ways of storing and accessing
> > documents to/from HDFS that provide optimum performance?
> >
> > Can you please share any sample code or a GitHub project?
> >
> > 2) What performance issues do I need to be aware of?
> >
> > Regards,
> > Arun
> >
>
>
>
> --
>
>
> Thanks & Regards,
> 李剑 Jameson Li
> Focus on Hadoop, MySQL
>

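And for question 1), the read path would be the mirror image of the write path:
fetch the metadata row from HBase first, then stream the document straight from
HDFS rather than through HBase. Again only a rough sketch, using the same
placeholder table and column family names as in the earlier write-path sketch.

import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IOUtils;

public class DocumentFetch {

    // Looks up the HDFS location in HBase, then streams the file to the caller.
    // "doc_metadata" / family "m" are the same placeholder names as before.
    public static void getDocument(String docId, OutputStream dest) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // 1) Read the small metadata row from HBase to find the file.
        String hdfsPath;
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("doc_metadata"))) {
            Result row = table.get(new Get(Bytes.toBytes(docId)));
            hdfsPath = Bytes.toString(
                    row.getValue(Bytes.toBytes("m"), Bytes.toBytes("hdfs_path")));
        }

        // 2) Stream the large payload directly from HDFS.
        FileSystem fs = FileSystem.get(conf);
        try (FSDataInputStream in = fs.open(new Path(hdfsPath))) {
            IOUtils.copyBytes(in, dest, conf, false);
        }
    }
}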