hey all,

I want to run an experimental cluster,
but my machines have limited disk capacity.
I want each node in my cluster to have
around 50,000 blocks.

I don't want to reduce the block size
(1K, 4K, etc.).

I saw SimulatedFSDataset in the HDFS code base.
Could anybody shed some light on how to use this
in a real cluster, i.e. a cluster where everything
is the same but the blocks are simulated?

Any hint is appreciated.

Thanks a lot.
Thanh