Usually a large file in HDFS is split into blocks and stored on different
DataNodes, and a map task is assigned to process each block. I wonder: what
if a piece of structured data (e.g. a word) is split across two blocks?
How do MapReduce and HDFS deal with this?
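To make the boundary case concrete, here is a sketch (illustrative Python, not actual Hadoop source) of the rule a line-oriented record reader can use so a record that straddles a split boundary is still read whole, exactly once: a non-first split skips its leading partial line, and every split reads past its own end to finish the last line it started.

```python
# Illustrative simulation (not Hadoop code) of how a line-oriented
# record reader assigns records that straddle split boundaries.

def read_records(data: bytes, start: int, end: int) -> list[bytes]:
    """Return the full lines 'owned' by the split [start, end).

    Rule: a non-first split skips the (possibly partial) line it starts
    in, because the previous split reads past its end to finish it; and
    every split reads PAST 'end' to complete the last line it started.
    """
    pos = start
    if start != 0:
        # Scan from start-1 so that a split beginning exactly on a line
        # start correctly skips nothing but the newline before it.
        nl = data.find(b"\n", start - 1)
        if nl == -1:
            return []          # no line begins inside this split
        pos = nl + 1
    records = []
    while pos < end:
        nl = data.find(b"\n", pos)
        if nl == -1:
            records.append(data[pos:])      # last line, no trailing newline
            break
        records.append(data[pos:nl])        # may extend beyond 'end'
        pos = nl + 1
    return records

data = b"alpha\nbravo\ncharlie\ndelta\n"
# Split boundary falls in the middle of "charlie":
splits = [(0, 15), (15, len(data))]
all_records = [r for s, e in splits for r in read_records(data, s, e)]
print(all_records)  # [b'alpha', b'bravo', b'charlie', b'delta']
```

Each word comes out whole and appears in exactly one split's record list, no matter where the boundary falls.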
Thanks!
Donal
Hi Franck,
This is not HDFS's intended use case and I do not think it will solve
your problem very successfully.
You're probably better off looking at some other technologies.
-Todd
On Thu, Nov 10, 2011 at 4:52 AM, Franck Besnard wrote:
> Hello,
> I'm looking at HDFS as a potential candidate t
Hello,
I'm looking at HDFS as a potential candidate to store files shared by a few
hundred users.
The idea would be to mount a local drive directly linked to the HDFS cloud. The
DataNodes would be deployed on each desktop, which means up to 500 DNs.
Now that said, browsing the web it seems that
Thanks for the reply. I've tried both 0.20.2 and 0.20.205 on Fedora Linux 8 and
Fedora Linux 12.
Cheers,
Paolo
On Thu, Nov 10, 2011 at 12:53 PM, Harsh J wrote:
> Hey Paolo,
>
> Against what version/release/distro of Hadoop are you attempting to build
> FUSE-DFS?
>
> On 10-Nov-2011, at 4:46 PM, Pao
Hey Paolo,
Against what version/release/distro of Hadoop are you attempting to build
FUSE-DFS?
On 10-Nov-2011, at 4:46 PM, Paolo Di Tommaso wrote:
> Dear all,
>
> I'm struggling to compile the Fuse_dfs binary component to mount HDFS in a
> Linux file system.
>
> I'm following build steps pu
Dear all,
I'm struggling to compile the Fuse_dfs binary component to mount HDFS in a
Linux file system.
I'm following build steps published here
http://wiki.apache.org/hadoop/MountableHDFS,
but I haven't been able to make it work. It always stops at the second step,
reporting some Apache Forrest validati