The data itself is eventually stored as files. Each block of a file, and
each of its replicas, is stored as a file in directories on different
nodes. The NameNode keeps and maintains the metadata about each file:
where its blocks (and replicated blocks) exist in the cluster.

As for the format, the data is stored as bytes. In the normal case you use
the DFS or FileOutputStream classes to write data, and in those instances
it is written in byte form (i.e. the data is serialized to bytes). When you
read the data back, you use the counterpart classes like InputStream, which
convert the bytes back to text (i.e. deserialization). The point being,
HDFS is oblivious to whether the content was JSON or XML.

This will be more evident if you look at the code to read/write from HDFS
(writing example linked below):
https://sites.google.com/site/hadoopandhive/home/how-to-write-a-file-in-hdfs-using-hadoop
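
In case that link goes stale, here is a minimal sketch of the same idea
using Hadoop's FileSystem API (the path and the JSON payload are made up
for illustration; HDFS itself only ever sees the bytes):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();     // picks up fs.defaultFS
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/tweets.json");     // hypothetical path

        // Write: the String is serialized to bytes; HDFS just stores bytes.
        String json = "{\"user\":\"bob\",\"text\":\"hello\"}";
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }

        // Read: the bytes come back and the reader deserializes them to text.
        try (FSDataInputStream in = fs.open(path);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}

Whether the bytes happened to be JSON, XML, or anything else makes no
difference to HDFS; only the code on either end cares.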

On the other hand, if you are using compression or other storage formats
like Avro or Parquet, then those formats come with their own classes which
take care of serialization and deserialization for you.
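
For example, with Avro you hand records to its writer classes instead of
writing bytes yourself. A rough sketch (the schema, record fields, and
path are made up; it assumes the avro and hadoop jars are on the classpath):

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AvroToHdfs {
    public static void main(String[] args) throws Exception {
        // Made-up schema for a tweet record.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Tweet\",\"fields\":["
          + "{\"name\":\"user\",\"type\":\"string\"},"
          + "{\"name\":\"text\",\"type\":\"string\"}]}");

        FileSystem fs = FileSystem.get(new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/tweets.avro"), true);

        GenericRecord tweet = new GenericData.Record(schema);
        tweet.put("user", "bob");
        tweet.put("text", "hello");

        // DataFileWriter serializes records into Avro's binary container
        // format; you never touch the raw bytes yourself.
        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.create(schema, out);  // writes the schema into the file header
            writer.append(tweet);
        }  // closing the writer flushes and closes the HDFS stream
    }
}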

For basic cases, this should be helpful:
https://www.inkling.com/read/hadoop-definitive-guide-tom-white-3rd/chapter-3/data-flow

More here on data storage:
http://stackoverflow.com/questions/2358402/where-hdfs-stores-files-locally-by-default
http://hadoop.apache.org/docs/r1.2.1/hdfs_design.html#Data+Organization
https://developer.yahoo.com/hadoop/tutorial/module1.html#data

Regards,
Shahab


On Sat, Jul 19, 2014 at 12:12 AM, Adaryl "Bob" Wakefield, MBA <
adaryl.wakefi...@hotmail.com> wrote:

>   And by that I mean: is there an HDFS file type? I feel like I’m missing
> something. Let’s say I have a HUGE JSON file that I import into HDFS. Does
> it retain its JSON format in HDFS? What if it’s just random tweets I’m
> streaming? Is it kind of like a normal disk, where there are all kinds of
> files sitting on disk in their own format, it’s just that in HDFS they are
> spread out over nodes?
>
> B.
>
