Looks like HBase MOB should be mentioned, since the feature was definitely
introduced with photo files/objects in mind.
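For reference, a minimal sketch of enabling MOB on a column family through the
HBase 2.x admin API; the "photos" table, "img" family, and the 100 KB threshold
are made-up example values, not anything from this thread:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateMobPhotoTable {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Cells bigger than the MOB threshold (100 KB here) are kept
                // in separate MOB files, keeping the regular flush/compaction
                // path free of large photo blobs.
                admin.createTable(TableDescriptorBuilder
                        .newBuilder(TableName.valueOf("photos"))
                        .setColumnFamily(ColumnFamilyDescriptorBuilder
                                .newBuilder(Bytes.toBytes("img"))
                                .setMobEnabled(true)
                                .setMobThreshold(100 * 1024L)
                                .build())
                        .build());
            }
        }
    }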
Regards,
Kai
From: Grant Overby [mailto:grant.ove...@gmail.com]
Sent: Thursday, September 07, 2017 3:05 AM
To: Ralph Soika
Cc: user@hadoop.apache.org
I'm late to the party, and this isn't a Hadoop solution, but apparently
Cassandra is pretty good at this:
https://medium.com/walmartlabs/building-object-store-storing-images-in-cassandra-walmart-scale-a6b9c02af593
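The gist of the article's approach is chunking each image into blob cells keyed
by photo id. A hypothetical sketch with the DataStax Java driver 4.x; the
photos.chunks schema and the 1 MB chunk size below are my assumptions, not
details from the post:

    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.cql.PreparedStatement;

    public class PhotoChunkStore {
        // Assumed schema:
        // CREATE TABLE photos.chunks (photo_id text, chunk_no int, data blob,
        //                             PRIMARY KEY (photo_id, chunk_no));
        private static final int CHUNK = 1024 * 1024; // 1 MB per cell

        public static void store(CqlSession session, String photoId, byte[] image) {
            PreparedStatement ps = session.prepare(
                    "INSERT INTO photos.chunks (photo_id, chunk_no, data) VALUES (?, ?, ?)");
            for (int off = 0, no = 0; off < image.length; off += CHUNK, no++) {
                byte[] part = Arrays.copyOfRange(image, off,
                        Math.min(off + CHUNK, image.length));
                // One row per chunk; chunk_no preserves ordering for reassembly.
                session.execute(ps.bind(photoId, no, ByteBuffer.wrap(part)));
            }
        }
    }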
On Wed, Sep 6, 2017 at 2:48 PM, Ralph Soika wrote:
Hi
I want to thank you all for your answers and your good ideas on how to
solve the Hadoop "small file problem".
Now I would like to briefly summarize your answers and suggested
solutions. First of all, I will describe once again my general use case:
* An external enterprise application needs to
I think MapR-FS is your solution.
From: Anu Engineer [mailto:aengin...@hortonworks.com]
Sent: Tuesday, September 05, 2017 10:33 PM
To: Hayati Gonultas; Alexey Eremihin; Uwe Geercken
Cc: Ralph Soika; user@hadoop.apache.org
Subject: Re: Is Hadoop basically not suitable for a photo archive?
Please take a look at HDFS-7240; we are developing an object store that uses
HDFS to store the small files. HDFS-7240, or Ozone, is designed for the
small-file use case.
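For a feel of the model, a rough sketch of putting one small object through
the Ozone client API; given the work-in-progress status, treat the class and
method names below (OzoneClientFactory, the volume/bucket/key calls) as an
approximation rather than a frozen interface:

    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.hadoop.ozone.client.ObjectStore;
    import org.apache.hadoop.ozone.client.OzoneBucket;
    import org.apache.hadoop.ozone.client.OzoneClient;
    import org.apache.hadoop.ozone.client.OzoneClientFactory;
    import org.apache.hadoop.ozone.client.OzoneVolume;
    import org.apache.hadoop.ozone.client.io.OzoneOutputStream;

    public class OzonePutSketch {
        public static void main(String[] args) throws Exception {
            byte[] photo = new byte[]{ /* image bytes */ };
            try (OzoneClient client =
                         OzoneClientFactory.getRpcClient(new OzoneConfiguration())) {
                ObjectStore store = client.getObjectStore();
                store.createVolume("photos");
                OzoneVolume volume = store.getVolume("photos");
                volume.createBucket("archive");
                OzoneBucket bucket = volume.getBucket("archive");
                // Each photo becomes a flat key; no per-file NameNode inode.
                try (OzoneOutputStream out =
                             bucket.createKey("img001.jpg", photo.length)) {
                    out.write(photo);
                }
            }
        }
    }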
Caveat Emptor: This is a work in progress, but most of the core functionality
is complete. However, we are not ready for
Kai, this is great. It is well down the path to solving the
small/object-as-file problem. Good show!
Daemeon C.M. Reiydelle
San Francisco 1.415.501.0198
London 44 020 8144 9872
On Mon, Sep 4, 2017 at 8:56 PM, Zheng, Kai wrote:
A nice discussion about support for small files in Hadoop.
Not sure if this really helps, but I'd like to mention that at Intel we have
actually spent some time on this interesting problem domain before, and again
recently. We planned to develop a small-file compaction optimization in the Smart
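Whatever the eventual design, one classic compaction pattern is to pack many
small files into a single SequenceFile keyed by the original file name, so the
NameNode tracks one large file instead of thousands of tiny ones. A minimal
sketch of that pattern (my illustration, not the Intel design):

    import java.io.File;
    import java.nio.file.Files;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SmallFilePacker {
        public static void pack(File[] smallFiles, String hdfsTarget) throws Exception {
            Configuration conf = new Configuration();
            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(new Path(hdfsTarget)),
                    SequenceFile.Writer.keyClass(Text.class),
                    SequenceFile.Writer.valueClass(BytesWritable.class))) {
                for (File f : smallFiles) {
                    // Key = original name, value = raw bytes; readers can later
                    // look up a photo by scanning or via an external index.
                    writer.append(new Text(f.getName()),
                            new BytesWritable(Files.readAllBytes(f.toPath())));
                }
            }
        }
    }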
I would recommend an object store such as OpenStack Swift as another option.
On Mon, Sep 4, 2017 at 1:09 PM Uwe Geercken wrote:
just my two cents:
Maybe you can use Hadoop for storage and pack multiple files together to use HDFS in a smarter way, while in parallel storing a limited, time-based window of data/photos in a different solution. I assume you won't need high-performance access to the whole time
Hi Ralph,
In general, Hadoop is able to store such data, and even HAR archives can be
used in conjunction with WebHDFS (by passing the offset and length
attributes). What are your reading requirements? FS metadata is not
distributed, and reading the data is limited by the HDFS NameNode server
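To illustrate the HAR point: the archive mounts as a filesystem under the
har:// scheme, and ranged reads use the ordinary positioned-read API (WebHDFS
exposes the same capability through its offset and length query parameters).
A minimal sketch with made-up paths:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HarRangedRead {
        public static byte[] read(long offset, int length) throws Exception {
            // har:/// layers over the default FS; the path inside the
            // archive follows the .har directory name.
            Path p = new Path("har:///archive/photos.har/2017/img001.jpg");
            FileSystem fs = p.getFileSystem(new Configuration());
            byte[] buf = new byte[length];
            try (FSDataInputStream in = fs.open(p)) {
                in.readFully(offset, buf); // positioned read: offset + length
            }
            return buf;
        }
    }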